Article
Smart Resources for Smart Cities: Distributed Architecture to Integrate Sensor Information

Jose-Luis Poza-Lujan 1,†,*, Juan-Luis Posadas-Yagüe 1,†, Jose E. Simó 1,† and Francisco Blanes 1,†

1 University Institute of Control Systems and Industrial Computing (ai2), Universitat Politècnica de València (UPV), Camino de Vera, s/n, 46022 Valencia, Spain; jopolu@ai2.upv.es, jposadas@ai2.upv.es, jsimo@ai2.upv.es, fblanes@ai2.upv.es
* Correspondence: jopolu@ai2.upv.es; Tel.: +34-963877000 (J.P.)
† These authors contributed equally to this work.
Abstract: Object recognition is a necessary task in smart city environments. This recognition can be used in processes such as the reconstruction of the environment map or the intelligent navigation of vehicles. This paper proposes an architecture that integrates heterogeneous distributed information to recognize objects in intelligent environments. The architecture is based on the IoT/Industry 4.0 model to interconnect the devices, called Smart Resources. Smart Resources can process local sensor data and send information to other devices. These other devices can be located in the same operating range (the Edge), in the same intranet (the Fog), or on the Internet (the Cloud). Smart Resources must have an intelligent layer in order to be able to process the information. A system with two Smart Resources equipped with different image sensors has been implemented to validate the architecture. Experiments show that the integration of information increases the certainty in the recognition of objects by between 2% and 4%. Consequently, in intelligent environments, it seems appropriate to provide the devices not only with intelligence, but also with capabilities to collaborate closely with other devices.
Keywords: Smart Environment, Smart Sensors, Distributed Architectures, Object Detection, Information Integration
1. Introduction
The growth of cities has given rise to an environment populated by increasingly intelligent and more connected devices. Practically all of them have sensors and actuators with very different capacities.

Classically, each one of these devices with sensors and actuators can be considered as a control node. However, because devices are connected to each other and can communicate to share their resources, the concept of a control node can change to the concept of an intelligent resource that provides services to the rest of the devices [1]. In addition, heterogeneous devices provide complementary information that can be used to enrich the knowledge of the system. Consequently, a distributed system in which the devices are intelligent can be defined as an intelligent system or Smart System [2]. Smart systems perform their tasks in dynamic environments with multiple features and changing conditions. Urban environments are an example of dynamic, unpredictable systems in which to apply intelligent systems.
Therefore, continuous and accurate knowledge of the environment is necessary to provide autonomy and interaction. Subsystems such as robot or vehicle navigation need to know the environment to perform tasks such as trajectory planning or mission execution [3]. Most urban systems are based on devices with many sensors. A device with many sensors may be overloaded, or its sensors may be used only sporadically, with the consequent loss of efficiency. To reduce the load on each device, it is important to be able to exchange sensory information between devices. For this, devices have to communicate not only with the devices of their own semantic level, but with any element of the system. It is for this reason that devices in smart cities could be decoupled from any hierarchy. Consequently, smart city architectures can be organised according to the amount of information and the frequency with which it is exchanged. Therefore, interacting at the information level instead of at the sensor level facilitates the intelligence of the system.
The fact that the elements of the system can exchange information between them, decoupled from their location in the hierarchy of the architecture, has led to the emergence of new paradigms (Figure 1). These paradigms and standard architectures organise the components along dimensions according to the amount of data with which they work, the geographical scope, or the immediacy (real time) of the messages and control responses [4]. Based on the dimensions proposed in [4], the Industry 4.0 and Internet of Things (IoT) models stand out as very suitable for the design of any system that provides intelligence to a city. The Reference Architectural Model Industrie 4.0 (RAMI 4.0) [5] is based on "smart products", which the IoT model [6] places in a layer called the "Edge". Examples of these products range from robots in an assembly line to smart street lamps that optimise energy consumption. These devices have a sensing and actuation scope of the order of a few metres, and a very fast reaction. When several Edge-level devices communicate or interact at a wider level, spatially or temporally, this is called the smart factory or platform level; for example, when monitoring the performance of all robots in an assembly line or when managing the lighting of an entire city. Finally, when the devices are connected to exchange large amounts of data or there is a longer-term reaction, this is called the connected world, the business level, or the well-known concept of the "cloud". The control architectures of smart cities fit perfectly into these models [7].
Figure 1. Architecture models in smart cities.
The paradigms shown in Figure 1 coincide in organising the elements according to the level of information or the frequency of access, rather than according to a dependence on the physical or logical topology. Therefore, this article proposes a collaboration model for object recognition whose location and models are in the Edge layer.
The recognition of objects is one of the usual functionalities required by the elements of an intelligent city. For example, vehicles need to recognise traffic signals for autonomous driving [8]. Other applications of object detection in cities are the detection of people [9] or the detection of vehicles [10], generally to improve road safety or the comfort of citizens.
Sensors can help to know the environment by detecting objects and some of their characteristics. However, when the detected objects have to be classified and recognised, a set of patterns with which to compare them is necessary [11]. For example, the shape of a box can be detected by means of a 3D sensor, but the same box can have different textures, so another type of sensor, for example an RGB sensor, is also necessary to recognise what type of box it is. Therefore, using heterogeneous sensors to detect and recognise the objects placed in an environment can increase the probability of successfully recognising the right object. When working with heterogeneous sensors, their information must be merged, usually remotely, creating sensor networks [12]. In summary, acquiring characteristics of the environment in order to associate them with specific objects implies the sequence of actions shown in Figure 2.
Figure 2. Overview of the components of the recognition process in the integration of sensory information.
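As a rough illustration of this sequence, the following Python sketch chains the stages of Figure 2 for a set of heterogeneous sensors. It is a minimal sketch: the function and type names are assumptions made for illustration, and the similarity measure is a toy stand-in for the pattern matching used later in the paper.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Measurement:
    sensor_type: str       # e.g. "rgb", "depth", "thermal"
    features: List[float]  # characteristics extracted from the raw data

def similarity(a: List[float], b: List[float]) -> float:
    """Toy similarity in (0, 1]: inverse of the mean absolute difference."""
    d = sum(abs(x - y) for x, y in zip(a, b)) / max(len(a), 1)
    return 1.0 / (1.0 + d)

def classify(m: Measurement, patterns: Dict[str, List[float]]) -> Dict[str, float]:
    """Compare one measurement against the stored pattern of each known
    object and return a certainty value per object."""
    return {obj: similarity(m.features, p) for obj, p in patterns.items()}

def integrate(results: List[Dict[str, float]]) -> Dict[str, float]:
    """Merge the per-sensor certainties into a single estimate per object
    (a plain average here; the paper uses more elaborate fusion)."""
    objects = results[0].keys()
    return {o: sum(r[o] for r in results) / len(results) for o in objects}

# Acquisition -> classification -> integration, one result set per sensor.
measurements = [Measurement("rgb", [0.9, 0.1]), Measurement("depth", [0.8, 0.2])]
patterns = {"BBBox": [0.9, 0.1], "XtionBox": [0.2, 0.8]}
print(integrate([classify(m, patterns) for m in measurements]))
```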
The inclusion of object detection in the environment map adds difficulty and forces the use of advanced sensors. Consequently, when there are many sensors, the data fusion depends on the fusion mechanism [13]. Once a certain precision in the detection of the object and its characteristics has been achieved, it should be possible to classify the object [14]. The classification of the object requires the use of patterns in order to compare the percentage of similarity [15]. Therefore, in an object recognition system, a database of patterns is necessary. In the models presented in Figure 1, the different components of the process presented in Figure 2 can be located at any level. The sensors belong to the Edge, while classification and integration are processes that can run on the local device (Edge), on any nearby device (Fog), or on dedicated servers (Cloud).
The addition of micro-controllers and micro-processors to sensor devices has increased the information capacity that sensors can provide. These devices are usually called smart, or intelligent, sensors [16]. When the sensor includes some advanced processing and, on some occasions, actuators, some authors call them smart devices [17]. Adding communication interfaces allows smart devices to share information and, consequently, to increase the knowledge of the environment. The use of smart devices is growing, from environments like Smart Cities [18] to the concept of Smart Objects when these devices are included in the daily life of people [19].
Consequently, the current situation is that sensors can send processed information rather than raw data. The result is that sensor networks become distributed systems that integrate sensor information in order to take advantage of the processed information [20]. When there are different distributed devices, there are some interesting problems to solve. One of them is to achieve a balance between the number of devices used and the correct use of their sensors. That is, when a new device is introduced, its sensors should help increase the probability of success when detecting and recognising an object. Consequently, the composition of and connections between the devices will determine how the objects are recognised. For example, two devices with an RGB sensor can recognise the same texture with a similar probability. However, the probability of success could increase by using another type of sensor which reinforces the process, for example, a thermal camera that can distinguish between different ink types. The diversity of available sensors is especially relevant in smart cities. In the same street, it is possible to find traffic cameras at fixed points, but also navigation cameras in vehicles. Both types of cameras can cooperate in the identification of objects on the roads. In this way, it is possible to build a system that distinguishes between authorised objects (such as temporary cleaning and roadwork signals) and objects not allowed (such as garbage or even potentially dangerous objects).
To study how different types of devices can cooperate, the Smart Resource model used in previous research has been adopted [21]. Smart Resources allow high connection flexibility, since their features are offered as services. The services offered depend on the available sensors and the computing capacity of each Smart Resource. Clients determine the necessary services, establishing a connection topology depending on their needs. For example, in the case of a Smart Resource that detects and identifies an object with a high probability, more information to corroborate the identified object may not be required. However, if the probability is low, the Smart Resource will need other measurements from other sensors that allow it to increase the probability of successfully identifying the object.
Figure 3 shows an example in which various devices have to detect objects in an urban environment. If both vehicles have cameras, they will be able to detect objects based on the patterns they have and the type of camera. When the vehicles are in a nearby environment, they will be able to dialogue in order to increase the certainty of their observations. In this way, if the rider of the bike is interested in looking for a waste bin and the electric scooter has recognised the object with more certainty, the electric scooter will be able to provide the location of the waste bin to the rider of the bike.
Figure 3. Urban environment in which various smart devices can collaborate in order to detect objects more accurately.
Integration of sensory information takes place across all levels of the architecture and provides a layer over sensory fusion to enrich the semantic meaning of the information provided to other components. Research into the integration of sensory information is based on enriching the semantic meaning of the information depending on the architecture level where it is integrated: the higher the architecture level, the richer the semantic meaning. For example, at the lower level of the architecture it could be interesting to provide all the values of a temperature sensor, but at a higher level the interesting information could be the average of these values and its comparison with the values of other temperature sensors. Sensory information is shared by the components of the architecture transparently to their location. Components could be close by or located in the cloud. Therefore, it is interesting to study how the functionalities offered by the system devices improve through the integration of sensory information.
The aim of this paper is to present the study and implementation of a solution to integrate sensory information in order to increase certainty in the object recognition process. Currently, the elements of smart cities are increasing their processing and communication capacity. Therefore, it is interesting to study how the efficiency of these systems can improve when their heterogeneous elements collaborate in order to integrate the information they have.
The paper is organised as follows. Once the aim of the investigation has been contextualised, Section 2, Materials and Methods, describes the proposed architecture and its classification according to the models presented above. Next, the implemented system, with its sensors and methods, is presented. Section 3 presents the scenario on which the system has been tested and the results of the experiments performed for the recognition of two different objects. The results obtained verify how the integration of information from the services provided by the Smart Resources improves the accuracy of object detection. Next, Section 4, Discussion, analyses the results and the repercussions of the proposed architecture. Finally, the conclusions are drawn and some of the future lines to be developed are presented.
2. Materials and Methods
2.1. System Architecture
According to the concepts of cloud, fog, and edge computing, and the need to have elements capable of interacting across all layers, an architecture has been designed whose components are based on Smart Resources (Figure 4).
Figure 4. Concept and components of a Smart Resource. From interaction with the physical world (left) to interaction with the rest of the system (right).
An intelligent resource is defined as an element of intelligent control that offers its capabilities for interaction with the environment through services. As an intelligent control element, it has a direct connection to the physical environment through a set of sensors and actuators. In order to carry out the control actions, the Smart Resource has the functions of acquisition, reactive processing, and action. Up to this point, a Smart Resource does not differ from a control node. For example, a traffic light with a VGA camera, a set of relays to control the light, and an Arduino microcontroller with a network connection constitutes a control node. The role of the microcontroller is to acquire and transmit images, and to receive orders to turn the lights on or off. However, depending on the processing capacity of the Smart Resource and the functionalities it offers, there will be a set of processes with a higher level of intelligence. If the previous device were provided with a more powerful microprocessor that allows, for example, storing a history to infer the evolution of traffic, or detecting the number of vehicles waiting, it would be a Smart Resource. These advanced features are offered through services to other Smart Resources or system elements.
Figure 5. Location of the Smart Resources in the architecture. Edge interaction is possible when Smart Resources are physically in contact, that is, when their operating ranges overlap. Interaction in the Fog allows communication with real-time restrictions between Smart Resources without the need to share physical space. Cloud interaction allows connection to other components and data servers without real-time restrictions.
One of the characteristics of smart city architectures is the interconnection capacity between elements or components. Connections between elements must be transparent to their location in the fog or in the cloud. A Smart Resource can have a communications system that allows connections at both the cloud and fog levels, or even at the edge level. In this last case, the Smart Resources are physically very close (direct contact at the Edge) and the communication channels used are specific, such as Bluetooth, or direct connections such as I2C.
The importance of having mechanisms for semantic information conversion has been discussed previously. Such mechanisms are needed, for example, when a large number of calculations is required on a historical archive of temperature samples to obtain a daily average or to predict a trend with a time horizon of one day. These semantic conversions require knowledge (for example, the temperature history found in the Cloud) and some data found directly within the Smart Resources. The element that allows semantic conversions between Fog and Cloud is called the Semantic Gateway. The Semantic Gateway acts as a broker that can provide information to the Edge.
Connections at the Edge are possible when two Smart Resources are in the same physical space or operating range. This operating range is defined as the physical space where the sensors and actuators of a Smart Resource interact. For example, when a person rides a bicycle, the sensors of the bicycle can connect and collaborate with the sensors of the person. The position sensors of the cyclist's mobile device can collaborate with a camera installed on the bicycle to transmit the route or recognise objects. This collaboration is interesting because the same device can collaborate with other devices during different intervals of time. Therefore, a communication protocol that allows collaboration between heterogeneous devices is necessary. Figure 6 shows the proposed service-oriented protocol.
Figure 6. Communication protocol diagram between two Smart Resources at the Edge level (Edge.SR.link).
The diagram in Figure 6 is placed at the application level. When two Smart Resources have been connected, both offer their services by exchanging a JSON message. A Smart Resource i offers its services to another Smart Resource j. When Smart Resource j requires a service of Smart Resource i, it requests it, and Smart Resource i generates a response with the result or information provided by the service. That response could also include an action.
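A minimal sketch of this exchange is shown below, with the JSON payloads built from Python dictionaries. The paper does not fix a message schema, so every field name here is an assumption chosen only to make the offer/request/response sequence concrete.

```python
import json

# Smart Resource i announces its services when the Edge link is established.
offer_from_i = json.dumps({
    "resource_id": "SR-i",
    "services": [
        {"name": "object_certainty",
         "inputs": ["object_pattern"],
         "outputs": ["certainty", "position", "timestamp"]},
    ],
})

# Smart Resource j requests one of the offered services...
request_from_j = json.dumps({
    "resource_id": "SR-j",
    "service": "object_certainty",
    "args": {"object_pattern": "BBBox"},
})

# ...and Smart Resource i answers with the result; the response may also
# carry an action for the requester to execute.
response_from_i = json.dumps({
    "service": "object_certainty",
    "result": {"certainty": 0.79, "position": [1.2, 0.4], "timestamp": 1572864000},
    "action": None,
})
```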
At the Fog level, a channel with connection management is necessary, one which allows communication between elements independently of their location. The Publish-Subscribe paradigm [22] is one of the most suitable, since it physically decouples the devices involved in the communication and connects each device to the information it is interested in. The communication system used is CKMultiPeer, described in [23]. CKMultiPeer is based on the Data Distribution Service (DDS) model, widely used in critical distributed systems [24]. Communication through CKMultipeer is done through Topics. A Topic is a common communication space in which information can be published. The element that publishes information writes the data in a specific Topic. The elements that wish to know the information of the Topic subscribe to it and, when the information changes, they receive a message with the new data.
The communication protocol between two Smart Resources using CKMultipeer at the Fog level is shown in detail in Figure 7.
Figure 7. Communication protocol diagram between the CKMultipeer broker and the Smart Resources at the Fog level (Fog.SR.link).
The communication protocol at the Fog level is based on the publish-subscribe paradigm. The exchange of services is carried out through Topics, and the Smart Resources interact in a decoupled way. First, the Smart Resources request from the Fog the list of available Topics through the broker, called CKMultipeer. As a result, they receive a JSON file with the list of services offered, their features, and their quality of service parameters [25]. When Smart Resource j requires a service, it subscribes to the associated Topic through the CKMultipeer broker. CKMultipeer is responsible for automatically sending to subscribers all the information published in the Topics. When the Smart Resource that provides the required service (Smart Resource i in Figure 7) publishes new information in the associated Topic, CKMultipeer notifies and sends the new information to all subscribed Smart Resources. Subscribers read the information asynchronously, that is, by means of an event-based notification model. When Smart Resource j no longer needs to receive information, it can unsubscribe from the corresponding Topic. Additionally, Smart Resources can read any Topic synchronously without any subscription.
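The sketch below mimics this interaction with a toy broker written in plain Python. CKMultipeer's actual API is not reproduced here; the class and method names are assumptions that only illustrate the Topic-based flow (discovery, subscription, event-based notification, synchronous read).

```python
from collections import defaultdict
from typing import Any, Callable, Dict, List

class ToyBroker:
    """Stand-in for the CKMultipeer broker: stores the last value of each
    Topic and notifies subscribers whenever a value is published."""

    def __init__(self) -> None:
        self.topics: Dict[str, Any] = {}
        self.subscribers: Dict[str, List[Callable[[Any], None]]] = defaultdict(list)

    def list_topics(self) -> List[str]:
        return list(self.topics)                  # service discovery

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self.subscribers[topic].append(callback)  # asynchronous, event-based

    def unsubscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self.subscribers[topic].remove(callback)

    def publish(self, topic: str, data: Any) -> None:
        self.topics[topic] = data
        for cb in self.subscribers[topic]:        # push to all subscribers
            cb(data)

    def read(self, topic: str) -> Any:
        return self.topics.get(topic)             # synchronous read, no subscription

broker = ToyBroker()
broker.subscribe("BBBox", lambda msg: print("SR-j received:", msg))
broker.publish("BBBox", {"certainty": 0.79, "position": [1.2, 0.4]})
```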
2.2. Smart resource-based object recognition
As described in the previous section, the classification and integration steps imply the use of patterns, and the decision is based on the probabilities of each pattern provided by the different sensors. The process is described in Figure 8.
Figure 8. Smart Resources scheme and connection with the patterns for object recognition, based on the analysis of the similarity between measurements and patterns.
Each object j has a specific pattern for each type of sensor i. A pattern is built from the object characteristics detected by a type of sensor. If the whole process were centralised, each device would need access to as many sensors as possible, and to the patterns to compare with those sensors. The storage load of all the patterns and the processing load of all the sensors on the same device could be too high. In addition, if a sensor obtains a very high probability with a specific object pattern, it would not be necessary to continue processing more sensors, unless 100% certainty were required. Therefore, a distributed system can be an adequate and efficient solution [26]. In such a system, only when a device has low certainty in the recognition of an object should it request more results from other devices that have recognised that object. Thus, a device A needs to have only the patterns of the sensors it uses. Device A should be able to consult a device B about an object, in order to reinforce the probability of the recognised object. In order to distribute the object recognition process, a system based on distributed intelligent devices, called Smart Resources, has been developed. The Smart Resource model is described in [27]. A Smart Resource is an extension of a smart device that offers services to the other system elements. For example, in the system described in the next section, the Smart Resources offer as services, among others, their position on the map and the probability detected for each pattern.
In order to communicate between the devices, a communication system is needed that allows subscription to specific services while offering a balanced network load. For example, in Figure 8, the object of pattern 1 may have associated the sensor types 1, 2, and 3. But if there is a device that only has sensors of types 1 and 2, and another device with a type 3 sensor, it is convenient for both devices to send and receive information about a type of pattern and not about a specific device.

The communication system allows a device to connect to a source of information (the pattern of a specific object) from which to obtain data that can reinforce the identification of a specific object, fulfilling the requirements mentioned above.
2.3. System implemented
To validate the architecture and test the operation of Smart Resources, a system with two Smart Resources has been implemented.
Figure 9. The case study used in the experiments. The vehicles are replaced by autonomous robots and the street objects by boxes. The robots can be controlled better than a real bike or scooter, which allows the experiment to be replicated.
The system is shown in Figure 10. Two Turtlebot robots [28] carry the Smart Resources. Each Smart Resource is composed of a BeagleBone [29], the corresponding sensors, and an IEEE 802.11 interface to communicate between them. In the experiments performed, real-world vehicles are replaced by robots. Using well-known and controllable robots, the experiments can be replicated with identical vehicle behaviour and movements. Turtlebot 1 carries Smart Resource 1 and Turtlebot 2 carries Smart Resource 2. The first Smart Resource has two sensors, a depth camera to detect the geometry and an RGB camera to detect the texture. The second Smart Resource has only one sensor, a thermal camera that produces an RGB image associated with the reflected colour. The colour of the image depends on the temperature and is directly associated with the ink composition.
Figure 10. Details of the system implemented to perform the experiments and the corresponding step associated with each component (top of the figure).
The reason for using a different RGB sensor is to be able to use the same recognition algorithms (2D image), but with different patterns of the same object.
2.4. Scenario and experiment performed
The objective of the experiment is to characterise the performance of the presented architecture. The experiments performed evaluate the results obtained in both single-robot and multi-robot approaches. While the single-robot experiments offer information for characterising the access to on-board smart devices, the multi-robot approach shows how to deal with spatially decoupled sensors. In order to provide these statistical values, a set of environment features is recognised and integrated as environment knowledge. This set includes the heterogeneous object samples of Figure 11, which are representative for testing.
Figure 11. Objects (boxes) used in the experiments. Other objects are used to measure the false positive rate.
In the proposed system shown in Figure 11, two specific objects are to be detected and recognised by means of two different Smart Resources. Both objects have the same geometry (boxes) but different textures (the box of an Xtion and the box of a BeagleBone).

The patterns used to recognise the textures are the images of the six faces of each box to be detected. Both Smart Resources contain both the texture (images) and the geometry (3D shape) patterns of the two boxes. Therefore, the matching of patterns against sensor measurements is performed in both Smart Resources. Figure 12 shows the processes that are carried out by the Smart Resources on the boxes.
Figure 12. Objects (boxes) used in the experiments, sensors in the Smart Resources, and patterns used to recognise the boxes.
The experiment starts when the two robots find the box. The BeagleBone box was tested first, and the Xtion box afterwards. When Smart Resource 1 detects a box with a reasonable certainty (higher than 0.500), it publishes the estimated box position, the time, and the certainty value in the Topic 'BBBox' or 'XtionBox'. A Topic is a common space to share data in a publish-subscribe system. Smart Resource 2 receives the certainty data of both boxes and integrates the information with the data obtained from its own sensors. To check whether the integration efficiency compensates for the transmission time, the whole data path between both Smart Resources must be examined (Figure 13).
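As an illustration, the payload published by Smart Resource 1 on the 'BBBox' Topic might look as follows. Only the position, time, and certainty fields are named in the text, so the exact keys and values are assumptions.

```python
def publish(topic: str, message: dict) -> None:
    """Stand-in for the Fog broker's publish (see the earlier sketch)."""
    print(f"publish to {topic}: {message}")

detection_msg = {
    "object": "BBBox",
    "position": {"x": 2.1, "y": 0.7},  # estimated box position on the map
    "timestamp": 1572864000.0,         # detection time
    "certainty": 0.792,                # local fusion result of Smart Resource 1
}

# Published only when the local certainty exceeds the 0.500 threshold.
if detection_msg["certainty"] > 0.500:
    publish("BBBox", detection_msg)
```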
Figure 13. Data path of the experiment performed. From the data acquisition (left) by means of the Smart Resource sensors to the result obtained with the integration of the information (right).
At the bottom of Figure 13, the times used by each Smart Resource to classify an object are shown. The time t_a refers to the acquisition time of the images by the cameras used. In the experiments it remains constant, depending on the sensor used.
The time t_cl is the time it takes to classify the images according to the available patterns. A Reliability-Based Particle Filter, presented in [30], was used for classification. In order not to alter the experiments, the patterns were already preloaded in each Smart Resource. When a pattern is not available, the Smart Resource has to request it from the Fog through CKMultipeer, or even from the Cloud.
The time t_f is the time it takes to fuse the various results. An Extended Kalman Filter (EKF), widely used in similar environments, was employed [31]. It is important to note that, once the object detected by the camera is classified, the percentage of certainty is used as the inverse of the error (the greater the certainty, the smaller the error) in the measurement of a sensor. That is, when an image is compared with a pattern, for example that of the BBBox box, the comparison is treated as a BBBox sensor with a specific percentage of certainty. The integration must provide an increase in certainty in the recognition of an object. But for integration, the Smart Resources must communicate between them. The time t_c is the communication time. CKMultipeer was used over a 54 Mbps WiFi network.
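The paper integrates these virtual "pattern sensors" with an EKF. As a simpler sketch of the same reinforcement idea, the snippet below combines two certainty values for the same object as if they were independent evidence; this Bayesian combination is an assumption standing in for the full EKF update, so its numbers differ from (and are less conservative than) the integration results reported in Section 3.

```python
def fuse_pair(c1: float, c2: float) -> float:
    """Combine two certainties for the same object hypothesis, treating
    them as independent evidence (Bayes rule with a uniform prior).
    Agreeing sensors reinforce each other: the result exceeds both inputs."""
    agree = c1 * c2
    return agree / (agree + (1.0 - c1) * (1.0 - c2))

# Turtlebot 2's texture certainty combined with the value received from
# Smart Resource 1 over the Fog (values from Table 2).
print(fuse_pair(0.789, 0.792))  # ~0.93, versus the EKF's 0.824 in Table 2
```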
The time t_i is the time of the integration of the results. A summary of the average times obtained is shown in the Results section.
3. Results
3.1. Latency time
The experiments detailed in this section make use of the Turtlebot 2 robotic platform [32]. Two Turtlebot robots with heterogeneous sensor configurations have been designed for these experiments. Every sensor has been integrated as part of a different Smart Resource. Therefore, sensory information and its management are always accessed through the distributed services provided by the related Smart Resource. The first Turtlebot configuration implements an Asus Xtion Pro (depth camera) Smart Resource as a 3D sensor [33], and a monocular RGB camera Smart Resource as a 2D sensor. Therefore, these experiments include two different feature classifications that involve heterogeneous magnitude observations in the 2D and 3D planes. The second Turtlebot is endowed with a thermal camera Smart Resource, which adds a new magnitude classification in the 2D plane. The Smart Resources were implemented on a BeagleBone board [34]. This board has an ARM Cortex-A8 microprocessor with a 1 GHz clock frequency. In order to perform different experiments with different Smart Resource links, the CKMultipeer middleware was used at the Fog and Cloud levels. The average message latency times obtained in these channels were measured using the protocol presented in the previous section.
Figure 13 shows the total time needed to obtain a final result of the information integration, that is, the sum of all the times involved in the data path. Times have been taken from processes running independently. Table 1 shows the estimated process times based on the decoupled tasks and results. In the case that a Smart Resource has two sensors (Turtlebot 1), the common times considered, such as t_a or t_cl, are the worst times among all the sensors.
Table 1. Results of the average estimated times in each stage of the process. The entry "n.a." (not applicable) means that the time is not provided because that phase is not performed in the experiment.

Object     SR                 t_a      t_cl     t_f      t_c      t_i      Total
BBBox      Turtlebot 1        16 ms    12 ms    25 ms    n.a.     n.a.     53 ms
           Turtlebot 2        32 ms    18 ms    n.a.     n.a.     n.a.     50 ms
           Integration (Fog)  n.a.     n.a.     n.a.     244 ms   23 ms    320 ms
XtionBox   Turtlebot 1        16 ms    15 ms    24 ms    n.a.     n.a.     55 ms
           Turtlebot 2        32 ms    17 ms    n.a.     n.a.     n.a.     49 ms
           Integration (Fog)  n.a.     n.a.     n.a.     244 ms   25 ms    318 ms
It should be noted that the process times are similar, because both Smart Resources use the same libraries and microcontroller board, and the boxes to be detected are quite similar. It can also be seen that CKMultipeer introduces a high latency. To check whether information integration is profitable, it is necessary to study whether the latency time spent to improve object recognition justifies a low percentage of certainty improvement. The ratio obtained using local integration and collaborative integration is studied in the next subsection.
3.2. Object recognition
In the proposed scenario, the two robots (Turtlebot 1 and Turtlebot 2) navigate until they detect the same set of objects. The two robots have different perspectives, and they are located correctly on the map. To show the process better, the results for each detected object are presented separately.
Table 2 shows the results obtained when the data from the sensors of the first robot (Turtlebot 1) are compared with the geometries of the boxes. It can be seen that both are very similar, with a certain difference in favour of the BeagleBone box. In the case of texture, the RGB sensor clearly detects a tendency towards the correct object.
Table 2. Results in the percentage of certainty of correct object detection, applying the integration method with two similar object patterns (BeagleBone box).

Object: BeagleBone box    Turtlebot 1                   Turtlebot 2
Object Pattern            Geometry   Texture   Fusion   Texture   Integration
BeagleBone box            0.726      0.671     0.792    0.789     0.824
Asus Xtion box            0.647      0.127     0.651    0.192     0.658
As can be seen in the table, when Smart Resource 2 requests from the system (through the CKMultiPeer Topics) the certainty of the object, the correct object is always reinforced. The data in the opposite direction, when the integration is done by Smart Resource 1 and the certainty information is provided by Smart Resource 2, are similar. Consequently, it is possible for two decoupled and heterogeneous systems to collaborate to improve their perceptions.
Table 3. Results in the percentage of certainty of correct object detection, applying the integration method with two similar object patterns (Xtion box).

Object: Xtion box         Turtlebot 1                   Turtlebot 2
Object Pattern            Geometry   Texture   Fusion   Texture   Integration
BeagleBone box            0.243      0.231     0.253    0.210     0.259
Asus Xtion box            0.851      0.712     0.886    0.812     0.902
Turtlebot 2 has also detected the two objects, but only by means of texture. Consequently, the Smart Resource of Turtlebot 2 requests the texture service from the system and, upon receiving the data, the correct object is reinforced. When merging the information, in Turtlebot 2 the object recognised as the BeagleBone box is reinforced much more than the Xtion box object (Table 3). In the case of the second object, the same trend can be observed, but it is the Xtion box that is reinforced.
4. Discussion
The experiments and results presented previously show that the local processing of information is less expensive in time than information integration. On average, information integration spends, in terms of time, six times as much as the classification and subsequent local fusion. However, information integration provides an improvement in object recognition. Obviously, with the results obtained, the waiting time is high for only a low percentage of improvement in recognition. The presented system allows a Smart Resource to decide whether the local certainty percentage when recognising an object is enough to make a decision, or whether it is better to use information integration with other Smart Resources to reinforce the certainty. For example, if a user is looking for a specific object such as a waste bin, it is probably enough to recognise it with low certainty and wait for the vehicle to recognise another one with a higher percentage; after all, there are many waste bins in the city. However, if the object recognised with low certainty is a traffic cone, which indicates a problem and a potential accident, it is better for the vehicle to decrease its speed and ask the nearby Smart Resources for more information to integrate, increasing the certainty rate. From the result of the information integration, a Smart Resource can make a decision, for example, whether it is possible to avoid the obstacle or whether it is better to stop the vehicle.
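A minimal sketch of such a decision policy follows: a per-class certainty requirement decides whether the local result suffices or whether it is worth paying the Fog latency for collaborative integration. The threshold values and function names are assumptions made for illustration.

```python
# Hypothetical certainty requirements: objects whose misdetection is cheap
# (a waste bin) tolerate low certainty, while safety-critical ones
# (a traffic cone) justify the ~6x slower collaborative integration.
REQUIRED_CERTAINTY = {"waste_bin": 0.5, "traffic_cone": 0.9}

def decide(obj: str, local_certainty: float, ask_fog) -> float:
    """Return the certainty used for the decision, integrating via the
    Fog only when the local value does not meet the requirement."""
    if local_certainty >= REQUIRED_CERTAINTY.get(obj, 0.8):
        return local_certainty             # fast path: local data is enough
    return ask_fog(obj, local_certainty)   # slow path: reinforce via the Fog
```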
The system used and the experiments performed are placed at the Fog level. The Fog level implies the use of a common communication channel between all the Smart Resources. Additionally, the communication channel must manage the connections between Smart Resources and third parties. The use of a common communication channel provides Smart Resources with transparent and decoupled access to information. If two or more Smart Resources need to be coupled, for example to share high-speed or high-frequency information, it is necessary to use a communication channel closer to the Edge level. This communication channel can be oriented to a physical connection, such as I2C, or wireless, such as Bluetooth.
To know what improvements the Fog brings, the local accuracy has been compared with the accuracy obtained by the collaboration of the two Smart Resources (Table 4). In this table, the optimization is obtained by dividing the Fog accuracy by the best local accuracy. The optimization cost is obtained by dividing the maximum latency by the optimization. This last parameter indicates how many milliseconds must be spent to increase the accuracy by one percentage point.
Table 4. Comparison results between local and Fog accuracy and latency, and the resulting optimization.

                   Local                  Fog
Object     SR      Accuracy   Latency     Accuracy   Latency    Optimization   Optimization Cost
BBBox      1       0.792      53 ms
           2       0.789      50 ms       0.824      320 ms     4%             79.2 ms
XtionBox   1       0.886      55 ms
           2       0.812      49 ms       0.902      318 ms     2%             176.1 ms
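In formula form, with Acc_local the best local accuracy, Acc_fog the accuracy after Fog integration, and L_fog the Fog latency, the two parameters of Table 4 can be reconstructed as

Optimization = (Acc_fog / Acc_local - 1) × 100%,    Optimization Cost = L_fog / Optimization.

For example, for the Xtion box, 0.902 / 0.886 ≈ 1.018, i.e. an optimization of about 2%, and 318 ms / 1.8 ≈ 176.1 ms per percentage point, matching the values in the table.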
In the study presented, the Cloud has not been considered. This is because this level manages large amounts of complex data and, consequently, its computation is expensive in time. The Cloud is mostly used to store new patterns or to update existing ones. The possibility of using the Cloud as a repository allows the Smart Resources to have more detection power and to adapt the patterns used locally to different environments. The Cloud allows Smart Resources to push the limits of microprocessor computation and storage.
5. Conclusions
The combination of sensors, micro-controllers, and communications allows cities to implement intelligent distributed systems. Because of the large number and variety of sensors existing in smart cities, it is convenient to organise them into devices that can interact with each other. This paper has presented how considering services as the communication method of a smart device allows the integration of information from different sensors. The experiments carried out verify that the integration of information notably increases the success in the detection of an object.
Based on the results, it is possible to apply information integration in smart cities as a method to improve the services offered by the different elements. Based on the experiments carried out, it is convenient to test how the Smart Resources employed detect other objects. The paper shows the experiments performed with a system of two Smart Resources detecting two different objects; by adding more objects and Smart Resources, it is possible to study the workload cost of recognising an environment. By distributing the object characteristics to be recognised, it is possible to balance the workload in order to use an optimal amount of system resources, such as processing time or communications load.
Funding: This research was funded by the Spanish Science and Innovation Ministry, grant number MICINN: CICYT project PRECON-I4: "Predictable and dependable computer systems for Industry 4.0" TIN2017-86520-C3-1-R.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:

DDS    Data Distribution Service
IoT    Internet of Things
RAMI   Reference Architectural Model Industrie 4.0
SR     Smart Resource
References
1. Munera, E.; Poza-Lujan, J.L.; Posadas-Yagüe, J.L.; Simó-Ten, J.E.; Noguera, J. Dynamic reconfiguration of a RGBD sensor based on QoS and QoC requirements in distributed systems. Sensors 2015, 15, 18080–18101.
2. Roscia, M.; Longo, M.; Lazaroiu, G.C. Smart City by multi-agent systems. 2013 International Conference on Renewable Energy Research and Applications (ICRERA). IEEE, 2013, pp. 371–376.
3. Ström, D.P.; Nenci, F.; Stachniss, C. Predictive exploration considering previously mapped environments. 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 2761–2766.
4. Zdraveski, V.; Mishev, K.; Trajanov, D.; Kocarev, L. ISO-standardized smart city platform architecture and dashboard. IEEE Pervasive Computing 2017, 16, 35–43.
5. Hankel, M.; Rexroth, B. The Reference Architectural Model Industrie 4.0 (RAMI 4.0). ZVEI, April 2015.
6. Dastjerdi, A.V.; Buyya, R. Fog computing: Helping the Internet of Things realize its potential. Computer 2016, 49, 112–116.
7. Zanella, A.; Bui, N.; Castellani, A.; Vangelista, L.; Zorzi, M. Internet of things for smart cities. IEEE Internet of Things Journal 2014, 1, 22–32.
8. Cao, J.; Song, C.; Peng, S.; Xiao, F.; Song, S. Improved traffic sign detection and recognition algorithm for intelligent vehicles. Sensors 2019, 19, 4021.
9. García, C.G.; Meana-Llorián, D.; G-Bustelo, B.C.P.; Lovelle, J.M.C.; Garcia-Fernandez, N. Midgar: Detection of people through computer vision in the Internet of Things scenarios to improve the security in Smart Cities, Smart Towns, and Smart Homes. Future Generation Computer Systems 2017, 76, 301–313.
10. Guerrero-Gómez-Olmedo, R.; López-Sastre, R.J.; Maldonado-Bascón, S.; Fernández-Caballero, A. Vehicle tracking by simultaneous detection and viewpoint estimation. International Work-Conference on the Interplay Between Natural and Artificial Computation. Springer, 2013, pp. 306–316.
11. Lim, G.H.; Suh, I.H.; Suh, H. Ontology-based unified robot knowledge for service robots in indoor environments. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 2011, 41, 492–509.
12. Zhang, J. Multi-source remote sensing data fusion: status and trends. International Journal of Image and Data Fusion 2010, 1, 5–24.
13. Deng, X.; Jiang, Y.; Yang, L.T.; Lin, M.; Yi, L.; Wang, M. Data fusion based coverage optimization in heterogeneous sensor networks: A survey. Information Fusion 2019, 52, 90–105. doi:10.1016/j.inffus.2018.11.020.
14. Azim, A.; Aycard, O. Detection, classification and tracking of moving objects in a 3D environment. 2012 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2012, pp. 802–807.
15. Jain, A.K.; Duin, R.P.; Mao, J. Statistical pattern recognition: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000, 22, 4–37.
16. Yurish, S.Y. Sensors: smart vs. intelligent. Sensors & Transducers 2010, 114, I.
17. Poslad, S. Ubiquitous Computing: Smart Devices, Environments and Interactions; John Wiley & Sons, 2011.
18. Hancke, G.P.; Hancke Jr, G.P.; et al. The role of advanced sensing in smart cities. Sensors 2012, 13, 393–425.
19. Lazar, A.; Koehler, C.; Tanenbaum, J.; Nguyen, D.H. Why we use and abandon smart devices. Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 2015, pp. 635–646.
20. Chen, Y. Industrial information integration—A literature review 2006–2015. Journal of Industrial Information Integration 2016, 2, 30–64.
21. Rincon, J.; Poza-Lujan, J.L.; Julian, V.; Posadas-Yagüe, J.L.; Carrascosa, C. Extending MAM5 meta-model and JaCalIVE framework to integrate smart devices from real environments. PLoS ONE 2016, 11, e0149665.
22. Eugster, P.T.; Felber, P.A.; Guerraoui, R.; Kermarrec, A.M. The many faces of publish/subscribe. ACM Computing Surveys (CSUR) 2003, 35, 114–131.
23. Simó-Ten, J.E.; Munera, E.; Poza-Lujan, J.L.; Posadas-Yagüe, J.L.; Blanes, F. CKMultipeer: Connecting Devices Without Caring about the Network. International Symposium on Distributed Computing and Artificial Intelligence. Springer, 2017, pp. 189–196.
24. Tijero, H.P.; Gutiérrez, J.J. Criticality Distributed Systems through the DDS Standard. Revista Iberoamericana de Automática e Informática Industrial 2018, 15, 439–447. doi:10.4995/riai.2017.9000.
25. Poza, J.L.; Posadas, J.L.; Simó, J.E. From the queue to the quality of service policy: A middleware implementation. International Work-Conference on Artificial Neural Networks. Springer, 2009, pp. 432–437.
26. Amurrio, A.; Azketa, E.; Javier Gutierrez, J.; Aldea, M.; Parra, J. A review on optimization techniques for the deployment and scheduling of distributed real-time systems. Revista Iberoamericana de Automática e Informática Industrial 2019, 16, 249–263.
27. Munera, E.; Poza-Lujan, J.L.; Posadas-Yagüe, J.L.; Simó-Ten, J.E.; Noguera, J.F.B. Dynamic Reconfiguration of a RGBD Sensor Based on QoS and QoC Requirements in Distributed Systems. Sensors 2015, 15, 18080–18101. doi:10.3390/s150818080.
28. Garage, W. Turtlebot. Website: http://turtlebot.com/, last visited 2011.
29. Coley, G. BeagleBone Black System Reference Manual. Texas Instruments, Dallas, 2013.
30. Munera Sánchez, E.; Muñoz Alcobendas, M.; Blanes Noguera, J.; Benet Gilabert, G.; Simó Ten, J. A reliability-based particle filter for humanoid robot self-localization in RoboCup Standard Platform League. Sensors 2013, 13, 14954–14983.
31. Chow, J.; Lichti, D.; Hol, J.; Bellusci, G.; Luinge, H. IMU and multiple RGB-D camera fusion for assisting indoor stop-and-go 3D terrestrial laser scanning. Robotics 2014, 3, 247–280.
32. Rogers III, J.G.; Nieto-Granda, C.; Christensen, H.I. Coordination strategies for multi-robot exploration and mapping. Experimental Robotics. Springer, 2013, pp. 231–243.
33. Chen, L.; Wei, H.; Ferryman, J. A survey of human motion analysis using depth imagery. Pattern Recognition Letters 2013, 34, 1995–2006.
34. Chianese, A.; Piccialli, F.; Riccio, G. Designing a smart multisensor framework based on BeagleBone Black board. In Computer Science and its Applications; Springer, 2015; pp. 391–397.
Conference Paper
Full-text available
The ability to explore an unknown environment is an important prerequisite for building truly autonomous robots. The central decision that a robot needs to make when exploring an unknown environment is to select the next view point(s) for gathering observations. In this paper, we consider the problem of how to select view points that support the underlying mapping process. We propose a novel approach that makes predictions about the structure of the environments in the unexplored areas by relying on maps acquired previously. Our approach seeks to find similarities between the current surroundings of the robot and previously acquired maps stored in a database in order to predict how the environment may expand in the unknown areas. This allows us to predict potential future loop closures early. This knowledge is used in the view point selection to actively close loops and in this way reduce the uncertainty in the robot’s belief. We implemented and tested the proposed approach. The experiments indicate that our method improves the ability of a robot to explore challenging environments and improves the quality of the resulting maps.
Conference Paper
Full-text available
We address the problem of vehicle detection and tracking for traffic monitoring in Smart City applications. We introduce a novel approach for vehicle tracking by simultaneous detection and viewpoint estimation. An Extended Kalman Filter (EKF) is adapted to describe the vehicle’s motion when not only the pose of the object is measured, but also its viewpoint with respect to the camera. Specifically, we enhance the motion model with observations of the vehicle viewpoint jointly extracted by the detection step. The approach is evaluated on a novel and challenging dataset with different video sequences recorded at urban environments, which is released with the paper. Our experimental validation confirms that the integration of an EKF with both detections and viewpoint estimations results beneficial.
Article
Full-text available
The inclusion of embedded sensors into a networked system provides useful information for many applications. A Distributed Control System (DCS) is one of the clearest examples where processing and communications are constrained by the client's requirements and the capacity of the system. An embedded sensor with advanced processing and communications capabilities supplies high level information, abstracting from the data acquisition process and objects recognition mechanisms. The implementation of an embedded sensor/actuator as a Smart Resource permits clients to access sensor information through distributed network services. Smart resources can offer sensor services as well as computing, communications and peripheral access by implementing a self-aware based adaptation mechanism which adapts the execution profile to the context. On the other hand, information integrity must be ensured when computing processes are dynamically adapted. Therefore, the processing must be adapted to perform tasks in a certain lapse of time but always ensuring a minimum process quality. In the same way, communications must try to reduce the data traffic without excluding relevant information. The main objective of the paper is to present a dynamic configuration mechanism to adapt the sensor processing and communication to the client's requirements in the DCS. This paper describes an implementation of a smart resource based on a Red, Green, Blue, and Depth (RGBD) sensor in order to test the dynamic configuration mechanism presented.
Conference Paper
Full-text available
The current economic crisis, combined with growing citizen expectations, is placing increasing pressure on European cities to provide better and more efficient infrastructures and services, often for less cost. This trend has contributed to the growing popularity and use of the term 'Smart City' [1]. The Smart City, represent a new way of thinking about urban space by shaping a model that integrates Green Energy Sources and Systems (GESSs), energy efficiency, sustainable mobility, protection of the environment and economic sustainability, that represent the goals for future developments. Smart cities are made by a high level of Information and Communication Technology-ICT- structures able to transmit energy, information flows multidirectional and connect a different sector that include mobility, energy, social, economy. Into Smart Cities transport systems are sustainable, smart grids are enhanced to ensure greater integration capabilities of production plants from renewable sources, public lighting is efficient, the buildings are equipped with sensors and devices aimed at rationalizing consumption energy and create greater awareness on the part of citizens, with the aim of improving the quality of life of people through a new governance of public administration capable of managing this innovation and cultural change. However, while wishing the transformation of cities in smart systems, have not defined models infrastructure, that allow different subsets to communicate and interact, in order to make the concrete realization of a smart city. The objective of this paper is to discuss a model of Smart City with a multi-agent systems and Internet of things, that provides intelligence to a city, as basic infrastructure for a definition of a model repeatable and exportable, so as advocated by the European Community, that is allocating considerable funds (Horizon 2020) for the creation of Smart City.
Article
Sensor networks, as a promising network paradigm, have been widely applied in a great deal of critical real-world applications. A key challenge in sensor networks is how to improve and optimize coverage quality which is a fundamental metric to characterize how well a point or a region or a barrier can be sensed by the geographically deployed heterogeneous sensors. Because of the resource-limited, battery-powered and type-diverse features of the sensors, maintaining and optimizing coverage quality includes a significant amount of challenges in heterogeneous sensor networks. Many researchers from both academic and industrial communities have performed numerous significant works on coverage optimization problem in the past decades. Some of them also have surveyed the current models, theories and solutions on the problem of coverage optimization. However, most of the existing surveys and analytical studies ignore how to exploit data fusion and cooperation of the deployed sensors to enhance coverage performance. In this paper, we provide an insightful and comprehensive summarization and classification on the data fusion based coverage optimization problem and techniques. Aiming at overcoming the shortcomings existed in current solutions, we also discuss the future issues and challenges in this area and sketch a general research framework in the context of reinforcement learning.
Article
The Internet of Things (IoT) could enable innovations that enhance the quality of life, but it generates unprecedented amounts of data that are difficult for traditional systems, the cloud, and even edge computing to handle. Fog computing is designed to overcome these limitations.
Conference Paper
In this paper, we present a framework based on 3D range data to solve the problem of simultaneous localization and mapping (SLAM) with detection and tracking of moving objects (DATMO) in dynamic environments. The basic idea is to use an octree based Occupancy Grid representation to model dynamic environment surrounding the vehicle and to detect moving objects based on inconsistencies between scans. The proposed method for the discrimination between moving and stationary objects without a priori knowledge of the targets is the main contribution of this paper. Moreover, the detected moving objects are classified and tracked using Global Nearest Neighbor (GNN) technique. The proposed method can be used in conjunction with any type of range sensors however we have demonstrated it using the data acquired from a Velodyne HDL-64E LIDAR sensor. The merit of our approach is that it allows for an efficient three dimensional representation of a dynamic environment, keeping in view the enormous amount of information provided by 3D range sensors.