Distributed Architecture to Integrate Sensor Information: Object Recognition for Smart Cities
Jose-Luis Poza-Lujan *, Juan-Luis Posadas-Yagüe, José-Enrique Simó-Ten and Francisco Blanes
University Institute of Control Systems and Industrial Computing (ai2), Universitat Politècnica de València (UPV), Camino de Vera, s/n, 46022 Valencia, Spain; jposadas@ai2.upv.es (J.-L.P.-Y.); jsimo@ai2.upv.es (J.-E.S.-T.); fblanes@ai2.upv.es (F.B.)
* Correspondence: jopolu@ai2.upv.es; Tel.: +34-963877000
Received: 1 November 2019; Accepted: 18 December 2019; Published: 23 December 2019
Abstract:
Object recognition, which can be used in processes such as reconstruction of the environment
map or the intelligent navigation of vehicles, is a necessary task in smart city environments. In this paper,
we propose an architecture that integrates heterogeneously distributed information to recognize objects
in intelligent environments. The architecture is based on the IoT/Industry 4.0 model to interconnect
the devices, which are called smart resources. Smart resources can process local sensor data and offer
information to other devices as a service. These other devices can be located in the same operating
range (the edge), in the same intranet (the fog), or on the Internet (the cloud). Smart resources must have
an intelligent layer in order to be able to process the information. A system with two smart resources
equipped with different image sensors is implemented to validate the architecture. Our experiments
show that the integration of information increases the certainty in the recognition of objects by 2–4%.
Consequently, in intelligent environments, it seems appropriate to provide the devices with not only
intelligence, but also capabilities to collaborate closely with other devices.
Keywords:
smart environment; smart sensors; distributed architectures; object detection; information
integration; smart cities
1. Introduction
The growth of cities has given rise to an environment populated by increasingly intelligent and
more connected devices, practically all of which have sensors and actuators featuring very different
capabilities. Each one of these devices can be considered as a control node; however, as the devices
are interconnected and can communicate to share their resources, the concept of a control node can be
reconsidered as the concept of an intelligent resource that provides services to the rest of the devices [1].
In addition, heterogeneous devices provide complementary information, which can be used
to enrich the overall knowledge. Consequently, a distributed system in which the devices are
intelligent can be defined as an intelligent system or smart system [2]. Smart systems perform
their tasks in dynamic environments with multiple features and changing conditions; for example,
urban environments are dynamic, unpredictable systems that provide challenges for the application
of intelligent systems.
Therefore, continuous and accurate knowledge of the environment is necessary to provide
autonomy and interaction. Subsystems, such as robots or vehicle navigation systems, need to know the
environment to perform their tasks, such as the planning of trajectories or the execution of missions [3].
Object recognition is one of the typical functionalities required by the elements of a smart city. For
example, vehicles need to recognise traffic signals for autonomous driving [4]. Other applications of
the detection of objects in cities are the detection of people [5] or the detection of vehicles [6], generally
to improve road safety or the comfort of citizens.
Sensors 2020, 20, 112; doi:10.3390/s20010112
The diversity of available sensors is especially relevant in smart cities. In the same street, it is
possible to find traffic cameras at fixed points, as well as navigation cameras in vehicles. Both types of
cameras can co-operate in the identification of objects on the road. In this way, it is possible to access a
system which allows for distinguishing between authorised objects (such as cleaning and work signals)
and non-authorised objects (such as garbage or even potentially dangerous objects).
Figure 1 shows an example in which various devices detect objects in an urban environment.
If both vehicles have cameras, they are able to detect objects based on the patterns they have and
the type of camera. When the vehicles are close to one another in the environment, they are able to
communicate in order to increase the certainty of their observations. In this way, if the driver of the
bike is interested in looking for a waste bin and the electric scooter has recognised the object with more
certainty, the electric scooter will be able to facilitate the location of the waste bin for the driver of the
bike. All the previous tasks must be done in a time range appropriate to each situation; for example,
to not miss the trash or to avoid the traffic cone. The other vehicle can use information integration
to be sure that the object detected is a traffic cone, to know if the cone can be avoided at the current
speed, or if it must decrease its speed to avoid it.
Figure 1. Urban environment in which various smart devices can collaborate in order to detect objects more accurately.
At present, the elements of smart cities are increasing their capacity for processing and
communication. Therefore, it is interesting to study how the efficiency of such systems can be improved
when their heterogeneous elements collaborate, in order to integrate the information they have collected.
The aim of this paper is to present the study and implementation of a latency-aware solution to
integrate sensory information, in order to increase certainty in the object recognition process.
Therefore, we propose an architecture that provides the necessary communications system, which
is easily accessed and allows for the integration of information from all types of distributed devices
which can be found in a smart city. These devices may be physical, such as an ultrasonic sensor,
or logical, such as a process that generates an average value using the temperatures obtained from
several physical devices. From the idea of logical devices, the architecture can be structured in different
levels, according to the abstraction of the data that the devices use or generate. At the same time,
the devices may be physically located according to their mutual proximity, which determines the
communication mechanism to be used. In any case, the architecture has to provide sufficient message
latency times for the devices to be capable of adequate decision-making.
To test the architecture, a use-case scenario is presented for object recognition. The objective is to
increase certainty in the recognition of specific objects by integrating information from different
distributed sensors. The idea is to test how the architecture allows for the agile integration of
heterogeneous information from various devices to improve the recognition of objects, regardless of
the recognition methods used. Better methods may lead to better results, but that is not our goal.
The paper is organised as follows: Once the aim of the investigation has been contextualised,
Section 2 presents the related work. Then, Section 3 describes the proposed architecture. Section 4
presents the use-case for object recognition, and Sections 4.1 and 4.2 describe the system implemented
with the sensors and methods and the scenario in which the system was tested, respectively. Next,
Section 5 presents the results of the experiments performed for the recognition of two different objects.
The results obtained verify how the integration of information from the services provided by the smart
resources improves the object detection accuracy. Finally, the conclusions are drawn and some of the
future lines of research to be developed are presented.
2. Related Work
In order to contextualise and focus the present study, this section presents the most-used current
paradigms to design architectures that can be applied in smart cities. Next, we focus on how devices
can provide intelligence to cities by detecting and recognising objects in the environment, and the role
of communication between these devices is discussed. Finally, the present work is placed in the context
of some intelligent city architectures that use the paradigms presented.
Within the field of the architecture of cyber-physical systems [7], our work is related to the current
technological trend of ubiquitous and decentralised computational paradigms [8]. Since the beginning
of artificial intelligence (AI) in the 1950s, how to apply AI in cities has always been one of the main
concerns. Consequently, architectures to manage intelligence in a city are one of the main research fields.
A large number of paradigms that can be used to inspire an architecture for a smart city have been
proposed. These include paradigms such as ubiquitous computing [9], Industry 4.0 [10], or the Internet of
Things (IoT) [11].
The fact that the elements of a cyber-physical system can exchange information, decoupling
their location in the hierarchy of the architecture, has led to the emergence of these paradigms.
These paradigms organise the components in layers according to certain characteristics, such as the
geographical scope, amount of data, or message latency [12]. Figure 2 shows the different layers
considered by the paradigms. Based on the dimensions proposed in [12], the ubiquitous computing,
Industry 4.0, and Internet of Things (IoT) paradigms stand out as very suitable for the design of any
system that provides intelligence to a city.
Figure 2. Decentralised computational paradigms.
IoT has mainly been applied to urban systems which are based on devices with many sensors.
A device with many sensors may be overloaded, or its sensors can be used very sporadically, with a
consequent loss of efficiency. To reduce the load on each device, it is important to be able to exchange
sensory information between devices. For this, devices have to be able to communicate not only with
devices at their own semantic level, but also with any element of the system.
The Reference Architectural Model Industry 4.0 (RAMI 4.0) [13] is based on smart products,
which the ubiquitous computing and IoT [14] paradigms place in the edge layer [15]. These devices
have a scope of sensorization and action on the order of a few meters, as well as very fast reaction
times. Examples of these products range from robots on an assembly line to smart street lamps, placed
in the same street, which optimise energy consumption. When several devices at the edge level
communicate or interact on a wider level (spatially or temporally), it is called the smart factory or
platform tier. The ubiquitous paradigm places these devices, or processes, in a layer called the fog
layer [15]; examples include monitoring the performance of all robots in a factory or
managing the lighting of a neighbourhood in a city. Finally, when the devices are connected to
exchange large amounts of data or when there is a longer-term reaction, it is called the connected world,
the business level, or the well-known concept of the cloud layer [9]. The control architectures of smart
cities fit perfectly into these models [16].
The addition of micro-controllers and micro-processors to sensor devices increases the information
capacity that the sensors can provide. These devices are usually called smart or intelligent sensors,
respectively [17]. When the sensor includes some advanced processing and (in some cases) actuators,
some authors have called them smart devices [18]. Adding a communication interface allows smart
devices to share information and, consequently, increase the knowledge of the environment. The use
of smart devices has grown from environments like smart cities [19] to the concept of smart objects,
where these devices have become part of the daily lives of people [20].
Consequently, the present situation is that sensors can send processed information, rather than
raw data. The result is that sensor networks can form distributed systems that integrate sensor
information in order to take advantage of the processed information [21]. When there are different
distributed devices in the network, some interesting problems arise. One of the problems is achieving
a balance between the number of devices used and the correct use of their sensors; that is, when a
new device is introduced, its sensors should help to increase the probability of success when detecting
and recognising an object. Consequently, the composition and connection between the devices will
determine how to recognise the objects. For example, two devices with Red, Green, and Blue (RGB)
sensors will recognise the same texture with a similar probability. However, the probability of success
could increase by using another type of sensor which reinforces the process; for example, a thermal
camera which can distinguish between different ink types.
Sensors can help to understand the environment by detecting objects and some of their
characteristics. However, when the objects detected have to be classified and recognised, a set of
patterns with which to compare [22] is necessary. For example, the shape of a box can be detected
by means of a 3D sensor, but the same box can have different textures, so it is also necessary to use
other types of sensors (e.g., an RGB sensor) to recognise what type of box it is. Therefore, using
heterogeneous sensors to detect and recognise the objects present in an environment can increase the
probability of success in recognising the object. When working with heterogeneous sensors, their
information must be merged, usually by creating remote sensor networks [23].
To monitor public spaces in smart cities, there are some functionalities that depend on object
detection and recognition. The previously mentioned examples are perfect ecosystems in which to use
the fog or edge computation paradigms. The heterogeneity of the devices placed in the edge, and the
possibility to process sensor information, allow devices such as streetlights to make decisions, such
as deciding whether or not they should be lit, or whether traffic lights should adapt their duration to the
length of vehicle queues. The fact that devices are placed in common spaces (e.g., the same street for
streetlights or a common neighbourhood for traffic lights) suggests that fog computing allows for the
creation of federations of nodes or even clusters dependent on the same field of action.
In summary, acquiring characteristics of the environment to associate them with specific objects
implies a sequence of actions, as shown in Figure 3.
Figure 3. Overview of the components of the object recognition process in the integration of sensory information.
The inclusion of object detection in the environment map adds a difficulty and forces the use
of advanced sensors. Consequently, when there are many sensors, the quality of data fusion is
dependent on the fusion mechanism [24]. Once a certain precision in the detection of the object and its
characteristics has been achieved, it should be possible to classify the object [25]. The classification of
an object requires the use of patterns in order to compare the percentage of similarity [26]. Therefore,
in an object recognition system, a database of patterns is necessary.
The different components of the process presented in Figure 3 can be located at any of the levels
presented in Figure 2. The sensors belong to the edge, while classification and integration are processes
that can occur at the local device (edge), at any nearby device (fog), or in dedicated servers (cloud).
As a result, communication middleware is required for different types of devices to co-operate and
communicate. This middleware has to allow subscription to specific services, offering a balanced
network load. The publish–subscribe [27] paradigm is one of the most suitable, as it allows the physical
decoupling of the devices involved in the communication and the connection of each device to the
information that it is interested in.
Smart city architectures based on the ubiquitous computing, Industry 4.0, or IoT paradigms need
to decide which components have to be placed in the cloud, the fog, or the edge. In [28], a cloud-based
architecture has been presented. In this case, some of the data are processed in the cloud-computing
infrastructure, while the edge handles the rest. In [29], a multi-layer smart city architecture has been
presented. In this case, the sensors communicate with advanced services after the integration phase,
which occurs directly in the cloud. Of the proposed paradigms, the fog computing model [30] seems
reasonable for locating the processing of tasks such as object recognition. This case has been revised
in [31]; as a consequence, fog computation seems appropriate to be used as a service in some processes.
The most recently developed processes involve delegating part of the computing to the edge. In [32],
an architecture that works at the edge demonstrated efficiency in processing images close to the
devices. Current architectures work mainly with fixed sensors in the city environment. In the case of
our proposed architecture, the sensors are placed in vehicles. These vehicles may coincide in nearby
spaces; in this case, the edge would be the best location for computing. However, considering the fact
that the coincidence time is relatively short, we locate the process of object recognition in the edge and
the process of integration in the fog. This location in the fog, with the possibility of moving to the edge,
is one of the novelties of our proposed architecture. Current architectures offer connection-oriented
communications systems to obtain information in a distributed manner. Another novelty of our
proposed architecture is the use of a latency-aware publish–subscribe communications middleware:
middleware with latency awareness is required, as vehicles must have a high degree of certainty when
making a decision. Fog clustering is based on the topics of the publish–subscribe middleware. This
last novel aspect is an interesting use of topics: when a vehicle is interested in an object type, it can use
a common topic for that object. All vehicles interested in this type of object (e.g., a traffic cone) use
the same topic. Our work differs from the existing works in the literature, as it considers a dynamic
clustering of nodes at the fog level.
3. Proposed Architecture
In this paper, according to the concepts of cloud, fog, and edge computing [8] and the requirement
for elements capable of interacting at all layers, an architecture whose components are based on the
smart resources has been designed (see Figure 4).
Smart resources have been proposed, by the authors of this paper, in previous studies [33].
Smart resources (Figure 5) allow high connection flexibility, as the features are offered as services.
The services offered depend on the available sensors and the computing capacity of each smart resource.
Clients determine the necessary services, establishing a connection topology depending on their needs.
For example, in the case of a smart resource that detects and identifies an object with a high probability,
more information to corroborate the identified object may not be required; however, if the probability
is low, the smart resource will need other measurements from other sensors that allow an increase in
probability of successfully identifying the object.
A smart resource is defined as an element of intelligent control that offers capabilities for
interaction with the environment through services. As an intelligent control element, it has a direct
connection to the physical environment through a set of sensors and actuators. In order to carry out
control actions, the smart resource has the functions of acquisition, reactive processing, and action.
Up to this point, a smart resource does not differ from a control node. For example, a traffic light with
a VGA camera, a set of relays to control the light, and an Arduino microcontroller with a network
connection constitute a control node. The role of the microcontroller is to acquire and transmit images,
as well as to receive orders to turn the lights on or off. However, depending on the processing capacity
of the smart resource and the functionalities it offers, there will be a set of processes with a higher level
of intelligence. If the device mentioned above was provided with a more powerful microprocessor
that allows, for example, the storage of historical data to infer the evolution of traffic or to detect the
number of vehicles waiting, it would then be a smart resource. These advanced features are offered,
through services, to other smart resources or system elements.
Figure 4. Location of the smart resources in the architecture. Edge interaction is possible when smart resources are physically in contact; that is, when their operating ranges overlap. Interaction in the fog allows communication with real-time restrictions between smart resources, without the need to share physical space. Cloud interaction allows connection to other components and data servers without real-time restrictions.
Figure 5. Concept and components of a smart resource. From interaction with the physical world (left) to interaction with the rest of the system (right).
Communication between smart resources, independent of their location, is provided by a
communications system called CKMultiPeer, which was proposed by the authors of this paper in [34].
CKMultiPeer is based on the data distribution service (DDS) model, which has been proposed for use
in mixed-criticality distributed systems [35].
Integration of sensory information is produced along all levels of the architecture, providing
a layer of sensory fusion to enrich the semantic meaning of the information provided to other
components. The integration of sensory information is based on the enrichment of the semantic
meaning of the information, depending on the architecture level at which it is integrated: The higher
the architectural level, the more semantic meaning. For example, at a lower level of the architecture,
all the values of a temperature sensor can be provided but, at a higher level, the information could
be the average of these values and its comparison with the values of other temperature sensors.
Sensory information is transparently shared by the components of the architecture, regardless of their
location: the components may be close by or located in the cloud.
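As an illustration of this semantic enrichment, the following minimal sketch (in Python, with illustrative names; the paper does not prescribe an implementation) shows a logical device that consumes the raw values of several physical temperature sensors and publishes a semantically richer summary:

```python
# Minimal sketch of a "logical device" (illustrative names): it consumes
# the raw values of several physical temperature sensors and provides a
# semantically richer value at a higher architectural level.
def enrich(temperature_streams):
    per_sensor_avg = [sum(s) / len(s) for s in temperature_streams]
    overall_avg = sum(per_sensor_avg) / len(per_sensor_avg)
    # The comparison of each sensor with the overall average is information
    # that no single physical device could provide on its own.
    deviations = [round(a - overall_avg, 2) for a in per_sensor_avg]
    return {"average": round(overall_avg, 2), "per_sensor_deviation": deviations}

print(enrich([[20.1, 20.3], [22.4, 22.0], [19.8, 20.0]]))
# {'average': 20.77, 'per_sensor_deviation': [-0.57, 1.43, -0.87]}
```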
CKMultiPeer allows connections between smart resources at both the cloud and fog levels, or even
at the edge level. In this last case, smart resources are physically very close (direct contact at the edge)
and the communication channels used are either specific wireless links (such as Bluetooth) or direct
connections (such as I2C).
The importance of having mechanisms for semantic information conversion has been previously
discussed. For example, a large number of calculations may be required to obtain a daily average
over a historical archive of temperature samples or to predict a trend with a time horizon of one day.
These semantic conversions require knowledge (for example, the temperature history found in the
cloud) and some data found directly within the smart resources. The element that allows semantic
conversions between the fog and the cloud is called the Semantic Gateway. The Semantic Gateway
acts as a broker which can provide information to the edge.
Connections in the edge are possible when two smart resources are in the same physical space
or operating range. The operating range is defined as the physical space where the sensors and
actuators of a smart resource can interact. For example, when a person rides a bicycle, the sensors
of the bicycle can connect and collaborate with the sensors on the person; for example, the position
sensors in the mobile device of a cyclist can collaborate with a camera installed on the bicycle to
transmit the route or to recognise objects. This collaboration is interesting, because the same device
can collaborate with other devices during different intervals of time. Therefore, a communication
protocol that allows the collaboration between heterogeneous devices is necessary. Figure 6 presents
the proposed service-oriented protocol.
Figure 6. Communication protocol diagram between two smart resources at the edge level (edge.SR.link).
The diagram in Figure 6 is located at the application level. When two smart resources have been
connected, both offer their services by exchanging a JSON message. A smart resource i offers its
services to another smart resource j. When smart resource j requires a service of smart resource i, it will
request it and smart resource i will generate a response with the result or information provided by the
service; this response may include an action, as well.
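This service exchange can be sketched as follows; the JSON field names and service names are illustrative assumptions, as the paper does not specify the exact message schema used by the smart resources.

```python
import json

# Hypothetical service catalogue offered by smart resource i (the field
# names are assumptions; the exact schema is not given in the paper).
offer = json.dumps({
    "resource": "SR-i",
    "services": [
        {"name": "object_certainty", "output": "probability"},
        {"name": "position", "output": "map_coordinates"},
    ],
})

# Smart resource j parses the offer and requests a service it needs.
catalogue = json.loads(offer)
assert any(s["name"] == "object_certainty" for s in catalogue["services"])
request = json.dumps({
    "resource": "SR-j",
    "request": "object_certainty",
    "params": {"object": "traffic_cone"},
})

# Smart resource i answers with the result of the service; the response
# may also report an action performed as part of serving the request.
response = json.dumps({
    "resource": "SR-i",
    "service": "object_certainty",
    "result": {"object": "traffic_cone", "certainty": 0.83},
    "action": None,
})
```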
At the fog level, communication through CKMultipeer is done through topics. A topic is a common
communication space in which information can be published. The element that publishes the information
writes the data in a specific topic. The elements that wish to know the information of the topic subscribe
to the topic and, when the information changes, they receive a message with the new data.
The communication protocol between two smart resources using CKMultipeer at the fog level is
shown, in detail, in Figure 7.
Figure 7. Communication protocol diagram between the CKMultipeer broker and smart resources at the fog level (fog.SR.link).
The communication protocol at the fog level is based on the publish–subscribe paradigm.
The exchange of services is carried out through topics and the smart resources interact in a decoupled
manner. First, the smart resources request, at the fog level, the list of available topics through the
CKMultipeer broker. As a result, they receive a JSON file with the list of services offered, features,
and service quality parameters [36]. When a smart resource requires a service, it subscribes to the
associated topic through CKMultipeer. CKMultipeer is responsible for automatically sending all
the information published in the topics to the relevant subscribers. When the smart resource which
provides the required service (e.g., smart resource i in Figure 7) publishes new information in the
associated topic, CKMultipeer notifies and sends the new information to all subscribed smart resources.
Subscribers asynchronously read the information; that is, by means of a notification model based on
events. When the smart resource does not need to receive more information, it can unsubscribe from
the corresponding topic. Additionally, smart resources can read any topic without any subscription by
means of a blocking operation.
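This topic-based interaction can be illustrated with a minimal, in-process sketch. The class and method names below are illustrative assumptions that mirror the protocol of Figure 7; they are not the real CKMultipeer API.

```python
from collections import defaultdict

class Broker:
    """In-process stand-in for a publish-subscribe broker. Method names
    mirror the protocol of Figure 7 and are illustrative assumptions,
    not the real CKMultipeer API."""

    def __init__(self):
        self._callbacks = defaultdict(list)  # topic -> subscriber callbacks
        self._last = {}                      # topic -> latest published data

    def topics(self):
        # Step 1: a smart resource asks for the list of available topics.
        return list(self._last)

    def subscribe(self, topic, callback):
        self._callbacks[topic].append(callback)

    def unsubscribe(self, topic, callback):
        self._callbacks[topic].remove(callback)

    def publish(self, topic, data):
        # New data is pushed to every subscriber of the topic
        # (asynchronously in the real system; synchronously here).
        self._last[topic] = data
        for notify in self._callbacks[topic]:
            notify(topic, data)

    def read(self, topic):
        # One-shot read without a subscription (described in the paper as
        # a blocking operation; this sketch simply returns the value).
        return self._last.get(topic)

broker = Broker()
broker.subscribe("BBBox", lambda topic, data: print(topic, data))
broker.publish("BBBox", {"position": (1.2, 3.4), "certainty": 0.79})
```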
4. Use-Case: Object Recognition
A use-case for object recognition was tested to validate the architecture. Smart resources perform
the functions of detecting objects using their own sensors, further enhancing the object certainty using
information that they obtain from other smart sensors.
As described in Section 2, the classification and integration steps imply the use of patterns,
where the decision is based on the probabilities of each pattern provided from the different sensors.
The process is described in Figure 8.
Figure 8. Smart resource scheme and connection with patterns for object recognition based on the analysis of the similarity between measurements and patterns.
Taking into account that each object j has a specific pattern from each type of sensor i, a pattern is
built with the object characteristics detected from a type of sensor. If the whole process is centralised,
each device should have access to as many sensors as possible, as well as the patterns to compare
with those sensors. However, the storage load of all the patterns and the processing load of all the
sensors in the same device may be too much. In addition, in the case that a sensor obtains a very high
probability with a specific object pattern, it is not necessary to continue processing data from more
sensors, unless 100% certainty is required. Therefore, a distributed system can provide an adequate
and efficient solution [37]. In this system, a device should request more results from other devices
which have recognised an object only if it has a low certainty in the recognition of the object. Thus,
a device only needs to have the patterns of the sensors used; moreover, it should be able to consult
another device about an object, in order to reinforce the probability of the recognised object.
Smart resources offer, as services, their position on the map and the probability detected for each
pattern, among others.
The communication system allows a device to connect to a source of information (e.g., the pattern
of a specific object), from which it obtains data that can reinforce the identification of a specific object.
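This request-only-when-uncertain policy can be sketched as follows, using a broker like the one sketched in the previous section; the threshold value and the function names are illustrative assumptions.

```python
CERTAINTY_THRESHOLD = 0.5  # assumed value (0.5 is mentioned as a reasonable
                           # prospect of certainty in Section 4.2)

def recognise(local_certainty, topic, broker, integrate):
    """Ask other devices for evidence only when local certainty is low.
    `broker` and `integrate` are placeholders for the middleware and the
    fusion algorithm (illustrative assumption)."""
    if local_certainty >= CERTAINTY_THRESHOLD:
        return local_certainty            # local evidence is enough
    remote = broker.read(topic)           # certainty published by a peer
    if remote is None:
        return local_certainty            # no peer has seen the object
    return integrate(local_certainty, remote["certainty"])
```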
4.1. System Implemented
The implemented system is shown in Figure 9. Two Turtlebot [38,39] robots were used to carry
the smart resources. Each smart resource was composed of one BeagleBone [40,41], the corresponding
sensors, and one IEEE 802.11 interface to allow communication between them. In the experiments
performed, real-world vehicles were replaced by robots, where Turtlebot 1 carried Smart Resource 1
and Turtlebot 2 carried Smart Resource 2. Using well-known and controllable robots, the experiments
can be replicated with vehicles having similar behaviour and movement.
Figure 9. The case study used in the experiments. Vehicles are replaced by autonomous robots and street objects are replaced by boxes. The robots can be controlled better than a real bike or scooter, which allows the experiment to be replicated.
Figure 10 shows the details of the implemented smart resources. The first smart resource used
had two sensors: a depth camera [42] to detect the three-dimensional geometry, and an RGB camera to
detect the two-dimensional texture. The second smart resource had only one sensor, a thermal camera
that produced an RGB image associated with the reflected colour. The colour of the image depended on
the temperature and was directly associated with the ink composition or box content.
Figure 10. Details of the system implemented to perform the experiments and the corresponding step associated with each component (top of the figure).
The reason for using different RGB sensors (conventional vision and thermal vision, both 2D) was to
be able to use the same recognition algorithms (2D image), but with different patterns of the same object.
The acquisition step used a triple buffer [43] implementation, which ensured that the sensor always
had fresh data available without interfering with the classification process.
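A minimal sketch of such a triple buffer, assuming one acquisition thread and one classification thread, could look as follows (the class is illustrative, not the implementation used in the paper):

```python
import threading

class TripleBuffer:
    """Three slots: one being written, one ready, one being read.
    The producer never waits for the consumer, and the consumer
    always gets the freshest completed frame."""

    def __init__(self):
        self._buffers = [None, None, None]
        self._write, self._ready, self._read = 0, 1, 2
        self._lock = threading.Lock()
        self._fresh = False

    def put(self, frame):                 # called by the acquisition thread
        self._buffers[self._write] = frame
        with self._lock:
            self._write, self._ready = self._ready, self._write
            self._fresh = True

    def get(self):                        # called by the classification thread
        with self._lock:
            if self._fresh:
                self._read, self._ready = self._ready, self._read
                self._fresh = False
        return self._buffers[self._read]
```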
The raw data classification process was based on the work presented in [44], which happens in three
different steps: segmentation, blob detection, and feature recognition. First of all, the segmentation
process allowed the device to extract regions of the same colour and depth from the raw image. In
the next step, some of these regions were grouped, forming image blobs, using the seed region
growing (SRG) technique [45]. Finally, some shape, size, density, and colour characteristics were
analysed to recognise some environment features, according to a set of available patterns, by means
of a reliability-based particle filter. The integration was carried out by means of a Bayesian fusion
algorithm with reinforcement associated with the different features provided by the different sensors.
This algorithm has been described, in detail, in [46].
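The exact reinforcement algorithm is the one described in [46]; as an illustration of the underlying idea, a minimal sketch that fuses per-sensor certainties under a conditional-independence assumption is:

```python
def bayes_fuse(certainties):
    """Fuse per-sensor certainties for the same object pattern under a
    conditional-independence assumption (a simplification; the actual
    reinforcement algorithm is described in [46]). Agreeing sensors
    reinforce each other: e.g. 0.7 and 0.7 fuse to about 0.84."""
    odds = 1.0
    for p in certainties:
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)

# Geometry and texture certainties of Table 2 (Turtlebot 1, BeagleBone box):
print(round(bayes_fuse([0.726, 0.671]), 3))  # ~0.844 (the paper reports 0.792)
```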
4.2. Experiment Performed
The objective of the experiment was to characterise the performance of the presented architecture.
The experiments that were performed evaluated the obtained results in both single- and multi-robot
approaches. While the single robot experiments offered information for characterising the access to
on-board smart devices, the multi-robot approach demonstrated how to deal with spatially decoupled
sensors. In order to provide these statistical values, a set of environmental features were recognised and
integrated as environmental knowledge. This set included heterogeneous object samples (Figure 11),
which were representative for testing.
Figure 11. Objects (boxes) used in the experiments. Other objects are used to measure the false positive rate.
Two specific objects (the boxes in the centre of Figure 11) were proposed for detection and
recognition by means of the two different smart resources. Both objects had the same geometry (boxes)
but different textures (the box of a Xtion and the box of a BeagleBone).
The patterns used to recognise the textures were the images of the six faces of each box to be
detected. Both smart resources contained both the texture (images) and geometry (3D shape) patterns
of the two boxes. Therefore, the matching between sensor measurements and patterns was performed
in both smart resources. Figure 12 shows the processes that were carried out based on the smart
resources and boxes.
Figure 12. The objects (boxes) used in the experiments, sensors in the smart resources, and patterns used to recognise the boxes.
The experiment started when the two robots found the box. First, the robots were tested with
the BeagleBone box; then, the Xtion box was used. When Smart Resource 1 detected a box with a
reasonable prospect of certainty (greater than 0.5), it published the estimated box position, time,
and certainty value in one of the topics ‘BBBox’ or ‘XtionBox’. A topic is a common space to share
data in a publish–subscribe system. Smart Resource 2 then received the data of the certainty of both
boxes and integrated the information with the data obtained from its sensors. The transmission time
between both smart resources was also considered, as shown in Figure 13.
Figure 13. Data path of the experiment performed. From the data acquisition (left), by means of the smart resources' sensors, to the result obtained from the integration of the information (right).
At the bottom of Figure 13, the times taken by each smart resource to classify an object are shown.
The time ta refers to the acquisition time of the images by the cameras used. In the experiments,
it remained constant for each sensor used.
The time tcl is the time it took to classify the images according to the available patterns. In order
not to alter the experiments, the patterns were already pre-loaded in each smart resource. When a
pattern was not available, the smart resource had to request it, either from the fog through CKMultipeer
or from the cloud.
The time tf is the time it took to fuse various results. An extended Kalman filter (EKF), which
has been commonly used in similar environments, was used [47]. It is important to note that, once
the object detected by the camera was classified, the percentage of certainty was used as the inverse
of the error (i.e., higher certainty implies lower error) in the measurement of a sensor; that is, when
comparing an image with a pattern, for example the pattern of the BBBox, it was considered to be a
BBBox box by the sensor with a specific percentage of certainty. Integration may provide an increase
in certainty in the recognition of an object but, for integration, the smart resources must communicate.
The time tc is the communications time. A 54 Mbps WiFi network was used for CKMultipeer.
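One way to realise "certainty as the inverse of error" in such a filter is to map each certainty to a measurement variance and combine estimates by inverse-variance weighting. The sketch below is illustrative; the variance mapping is an assumption, not the paper's exact formulation.

```python
def fuse_positions(m1, c1, m2, c2):
    """Inverse-variance weighting of two scalar measurements, treating
    certainty as the inverse of error: higher certainty -> lower assumed
    variance. The mapping variance = 1 - certainty is an illustrative
    assumption, not the paper's exact formulation."""
    v1, v2 = 1.0 - c1, 1.0 - c2
    w1, w2 = 1.0 / v1, 1.0 / v2
    return (w1 * m1 + w2 * m2) / (w1 + w2)

# A position reported with certainty 0.9 dominates one reported with 0.6:
print(fuse_positions(2.0, 0.9, 3.0, 0.6))  # 2.2
```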
The time ti is the time it took to integrate the results. A summary of the average times obtained
is shown in the results section.
5. Results
5.1. Latency Time
The average message latency times were obtained using the protocol presented in the previous section.
Figure 13 shows the total time taken to obtain the final result of information integration; that
is, the addition of all times involved in the data path. Times were taken from processes running
independently. Table 1 shows the estimated process times, based on the decoupled tasks and results.
In the case that the smart resource had two sensors (Turtlebot 1), the common times (e.g., ta or tcl)
considered were the worst of all the sensor times.
Table 1. Results of the average estimated times in each stage of the process. The acronym "n.a." (not applicable) means that the time is not provided because that phase was not performed in the experiment.

Object     SR                 ta       tcl      tf       tc        ti       Total
BBBox      Turtlebot 1        16 ms    12 ms    25 ms    n.a.      n.a.     53 ms
           Turtlebot 2        32 ms    18 ms    n.a.     n.a.      n.a.     50 ms
           Integration (fog)  n.a.     n.a.     n.a.     244 ms    23 ms    320 ms
XtionBox   Turtlebot 1        16 ms    15 ms    24 ms    n.a.      n.a.     55 ms
           Turtlebot 2        32 ms    17 ms    n.a.     n.a.      n.a.     49 ms
           Integration (fog)  n.a.     n.a.     n.a.     244 ms    25 ms    318 ms
It should be noted that the process times were similar, as both smart resources used the same
libraries and microcontroller boards, and the boxes to be detected were quite similar. It can be seen
that CKMultipeer introduced high latency. To check whether information integration is profitable, it
is necessary to study whether the improvement in recognition certainty justifies the additional latency.
The ratio obtained using local integration and collaborative integration is studied in the next subsection.
5.2. Object Recognition Certainty
In the proposed scenario, the two robots (Turtlebot 1 and Turtlebot 2) navigated until they detected
the same set of objects. Both robots had a different perspective and were located correctly on the map.
To show the process better, the results of each detected object are shown separately.
Table 2 shows the results obtained when the data from the sensors of the first robot (Turtlebot 1)
were compared with the geometries of the boxes. It can be seen that both were very similar, with a
certain difference favourable to the BeagleBone box. In the case of texture, the RGB sensor had a clear
tendency to detect the correct object.
Table 2. Results of certainty for correct object detection applying the integration method with two similar objects (BeagleBone box).

Object: BeagleBone box     Turtlebot 1                        Turtlebot 2
Object Pattern             Geometry   Texture   Fusion        Texture   Integration
BeagleBone box             0.726      0.671     0.792         0.789     0.824
Asus Xtion box             0.647      0.127     0.651         0.192     0.658
As can be seen from Table 2, when Smart Resource 1 queried the system (through the CKMultiPeer
topics) for the certainty of the object, the correct object was always reinforced. Furthermore, the data
in the opposite direction, when the integration was done by Smart Resource 1 and the information
of certainty was provided by Smart Resource 2, were similar. Consequently, it is possible for two
uncoupled, heterogeneous systems to collaborate to improve their perception of objects.
Turtlebot 2 also detected the two objects, but only by means of texture. Consequently, the smart
resource of Turtlebot 2 requested the texture service from the system and, upon receiving the data,
the correct object was reinforced. When merging the information, the object recognised as the BeagleBone
box by Turtlebot 2 was reinforced much more than the Xtion box object (Table 3). In the case of the
second object, the same trend can be observed, but it was the Xtion box that was reinforced.
Table 3. Results of certainty for correct object detection applying the integration method with two similar objects (Xtion box).

Object: Xtion box          Turtlebot 1                        Turtlebot 2
Object Pattern             Geometry   Texture   Fusion        Texture   Integration
BeagleBone box             0.243      0.231     0.253         0.210     0.259
Asus Xtion box             0.851      0.712     0.886         0.812     0.902
However, in all cases, the certainty of the wrong object also increased. This was an expected
behaviour, due to the algorithm used for the integration, which should not be a problem as the certainty
of the incorrect object increased in a lesser proportion than the certainty of the correct object. Even so,
the integration algorithm could be modified to correct the certainty in such cases.
5.3. Information Integration
To understand the improvements that the fog brings, edge accuracy was compared with the accuracy
obtained by information integration of the two smart resources (Table 4). In Table 4, the optimization
value was obtained by dividing the fog accuracy obtained from the information integration process
by the best edge accuracy. The cost of integration was obtained by dividing the latency by the
optimization, which indicates how many milliseconds we needed to spend to increase the accuracy
by one percent.
Table 4. Comparison results between edge accuracy and fog accuracy integration (only done by Smart Resource 2).

                      Local (smart resource)        Fog Integration
Object     SR         Accuracy   Latency    Accuracy   Latency   Optimization   Integration Cost
BBBox      1          0.792      53 ms
           2          0.789      50 ms      0.824      320 ms    4%             79.2 ms
XtionBox   1          0.886      55 ms
           2          0.812      49 ms      0.902      318 ms    2%             176.1 ms
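To make the bookkeeping concrete, the following snippet reproduces the BBBox row of Table 4 from the values reported above:

```python
# Worked example reproducing the BBBox row of Table 4 (values from the paper).
best_edge_accuracy = 0.792   # Turtlebot 1, local fusion at the edge
fog_accuracy = 0.824         # after information integration in the fog
fog_latency_ms = 320.0

optimization = (fog_accuracy / best_edge_accuracy - 1.0) * 100.0  # ~4%
integration_cost = fog_latency_ms / optimization                  # ~79.2 ms per 1%
print(f"optimization = {optimization:.1f}%, cost = {integration_cost:.1f} ms/%")
```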
From the results presented above, it can be seen that the local processing of information was less
expensive in time than information integration. On average, information integration took six times
as long as classification and subsequent local integration. However, information integration provided
an improvement in the certainty of object recognition. With the results obtained, the additional time
needed for a small improvement in recognition certainty was high.
6. Conclusions
The proposed system allows a smart resource to decide whether the percentage of local certainty
in recognizing an object is enough to make a decision, or if it is better to employ information integration
using other smart resources to reinforce the recognition certainty. For example, if a user is looking for a
specific object, such as a waste bin, it is probably enough to recognize it with low certainty and wait for
the vehicle to recognize another with a higher percentage; after all, there are many waste bins in a city.
However, if the object recognized with little certainty is a traffic cone, which indicates a problem or a
potential accident, it is better for the vehicle to decrease its speed, ask nearby smart resources for
more information, and integrate it, in order to increase the certainty rate and decide between avoiding
an obstacle or stopping the vehicle to avoid a collision.
The system used and experiments performed are placed in the edge and fog levels. The fog level
implies the use of a common communication channel between all smart resources; this communication
channel must manage the connections between smart resources and third parties. The use of a
common communication channel provides the smart resources with transparent and decoupled access to
information. If two (or more) smart resources need to be coupled (for example, to share high-speed
or high-frequency information), it is necessary to use a communication channel close to the edge
level. This communications channel can be oriented to a physical connection, such as I2C, or wireless
connection, such as Bluetooth.
In the presented use-case, the cloud was not considered. This was because this level manages large
amounts of complex data and, consequently, involves higher computational times. The cloud is used,
mostly, to store new patterns or update existing ones. As such, it can be used as a repository, allowing
smart resources to have more detection power and locally adapt the patterns to different environments.
The cloud allows smart resources to push the power limits of microprocessor computation and storage.
The combination of sensors, actuators, micro-controllers, and communications interfaces allows
smart cities to be implemented by means of distributed intelligent systems. This paper proposes a
general latency-aware architecture to integrate sensor information, which is focused on the results of
increasing certainty in the recognition of objects. Due to the large number and variety of sensors
which exist in smart cities, it is convenient to organize them into devices that can interact with each other.
Increased accuracy in object recognition is based on node collaboration by integrating information from
nearby nodes. The experiments carried out verified that the integration of information increased the
certainty and, consequently, the success in object detection and recognition.
Based on the results, it is possible to apply information integration in smart cities as a method to
improve the services offered by different elements. The proposed system, with two image processing
devices, serves as a proof-of-concept case. The use-case had only two vehicles and integrated three
sensors, which was useful to validate the idea but suggests that research in new scenarios can clarify
the relationships between optimisation and latency. Constructing use-cases with more sensors and
vehicles will allow a device to make decisions about whether waiting for the integration process from
other devices is necessary to improve the optimisation, or if the device can work with self-assurance.
The results obtained showed only a small increase in certainty. However, this does not detract from the
validity of the proposed architecture. Different kinds of sensors and detection or fusion algorithms may
obtain different results, which may present better outcomes, as they do not depend on the proposed
architecture. Furthermore, in this work, we assumed that the devices could not reach each other
directly without going through the fog layer (using the publish–subscribe paradigm), as our vehicles
were in motion. However, as future work, the case in which the devices can reach each other directly
without going through the fog layer could be interesting to investigate, as it could reduce the latency
in some scenarios. It would also be interesting to investigate a dynamic layer placement of the devices
(i.e., from fog to edge and vice versa), which may be useful in deciding the best layer to use, depending
on the desired certainty increase or the latency decrease.
Author Contributions: Conceptualization, J.-E.S.-T., J.-L.P.-Y. and J.-L.P.-L.; methodology, J.-L.P.-L.; software,
J.-E.S.-T. and J.-L.P.-L.; validation, J.-E.S.-T. and F.B.; formal analysis, J.-L.P.-Y. and J.-L.P.-L.; investigation, J.-E.S.-T.,
J.-L.P.-Y. and J.-L.P.-L.; resources, J.-E.S.-T., J.-L.P.-Y., F.B. and J.-L.P.-L.; data curation, J.-L.P.-Y. and J.-L.P.-L.;
writing—original draft preparation, J.-L.P.-Y. and J.-L.P.-L.; writing—review and editing, J.-L.P.-Y. and J.-L.P.-L.;
visualization, J.-L.P.-L. and J.-L.P.-Y.; supervision, J.-E.S.-T. and F.B.; project administration, J.-E.S.-T. and F.B.;
funding acquisition J.-E.S.-T. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Spanish Science and Innovation Ministry, grant number MICINN: CICYT project PRECON-I4: "Predictable and dependable computer systems for Industry 4.0" TIN2017-86520-C3-1-R.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
DDS Data Distribution Service
IoT Internet of Things
RAMI 4.0 Reference Architectural Model Industrie 4.0
RGB Red, Green, Blue
SR smart resource
References
1. Munera, E.; Poza-Lujan, J.L.; Posadas-Yagüe, J.L.; Simó-Ten, J.E.; Noguera, J.F.B. Dynamic Reconfiguration of a RGBD Sensor Based on QoS and QoC Requirements in Distributed Systems. Sensors 2015, 15, 18080–18101, doi:10.3390/s150818080.
2. Roscia, M.; Longo, M.; Lazaroiu, G.C. Smart City by multi-agent systems. In Proceedings of the 2013 International Conference on Renewable Energy Research and Applications (ICRERA), Madrid, Spain, 20–23 October 2013; pp. 371–376.
3. Ström, D.P.; Nenci, F.; Stachniss, C. Predictive exploration considering previously mapped environments. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2761–2766.
4. Cao, J.; Song, C.; Peng, S.; Xiao, F.; Song, S. Improved traffic sign detection and recognition algorithm for intelligent vehicles. Sensors 2019, 19, 4021.
5. García, C.G.; Meana-Llorián, D.; G-Bustelo, B.C.P.; Lovelle, J.M.C.; Garcia-Fernandez, N. Midgar: Detection of people through computer vision in the Internet of Things scenarios to improve the security in Smart Cities, Smart Towns, and Smart Homes. Future Gener. Comput. Syst. 2017, 76, 301–313.
6. Guerrero-Gómez-Olmedo, R.; López-Sastre, R.J.; Maldonado-Bascón, S.; Fernández-Caballero, A. Vehicle tracking by simultaneous detection and viewpoint estimation. In Proceedings of the International Work-Conference on the Interplay Between Natural and Artificial Computation, Mallorca, Spain, 10–14 June 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 306–316.
7. Shi, J.; Wan, J.; Yan, H.; Suo, H. A survey of cyber-physical systems. In Proceedings of the 2011 International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, China, 9–11 November 2011; pp. 1–6.
8. Escamilla-Ambrosio, P.; Rodríguez-Mota, A.; Aguirre-Anaya, E.; Acosta-Bermejo, R.; Salinas-Rosales, M. Distributing Computing in the Internet of Things: Cloud, Fog and Edge Computing Overview. In NEO 2016; Springer: Berlin/Heidelberg, Germany, 2018; pp. 87–115.
9. Mell, P.; Grance, T. The NIST Definition of Cloud Computing; Special Publication 800-145; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011.
10. Lu, Y. Industry 4.0: A survey on technologies, applications and open research issues. J. Ind. Inf. Integr. 2017, 6, 1–10.
11. Li, S.; Da Xu, L.; Zhao, S. The internet of things: A survey. Inf. Syst. Front. 2015, 17, 243–259.
12. Zdraveski, V.; Mishev, K.; Trajanov, D.; Kocarev, L. ISO-standardized smart city platform architecture and dashboard. IEEE Pervasive Comput. 2017, 16, 35–43.
13. Hankel, M.; Rexroth, B. The Reference Architectural Model Industrie 4.0 (RAMI 4.0); ZVEI: Frankfurt am Main, Germany, April 2015.
14. Dastjerdi, A.V.; Buyya, R. Fog computing: Helping the Internet of Things realize its potential. Computer 2016, 49, 112–116.
15. Tseng, M.; Canaran, T.; Canaran, L. Introduction to Edge Computing in IIoT; Industrial Internet Consortium: Needham, MA, USA, 2018; pp. 1–19.
16. Zanella, A.; Bui, N.; Castellani, A.; Vangelista, L.; Zorzi, M. Internet of things for smart cities. IEEE Internet Things J. 2014, 1, 22–32.
17. Yurish, S.Y. Sensors: Smart vs. intelligent. Sens. Transducers 2010, 114, I–VI.
18. Poslad, S. Ubiquitous Computing: Smart Devices, Environments and Interactions; John Wiley & Sons: Hoboken, NJ, USA, 2011.
19. Hancke, G.P.; Silva, B.d.C.; Hancke, G.P., Jr. The role of advanced sensing in smart cities. Sensors 2012, 13, 393–425.
20. Lazar, A.; Koehler, C.; Tanenbaum, J.; Nguyen, D.H. Why we use and abandon smart devices. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Osaka, Japan, 7–11 September 2015; pp. 635–646.
21. Chen, Y. Industrial information integration—A literature review 2006–2015. J. Ind. Inf. Integr. 2016, 2, 30–64.
22. Lim, G.H.; Suh, I.H.; Suh, H. Ontology-based unified robot knowledge for service robots in indoor environments. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2011, 41, 492–509.
23. Zhang, J. Multi-source remote sensing data fusion: Status and trends. Int. J. Image Data Fusion 2010, 1, 5–24.
24. Deng, X.; Jiang, Y.; Yang, L.T.; Lin, M.; Yi, L.; Wang, M. Data fusion based coverage optimization in heterogeneous sensor networks: A survey. Inf. Fusion 2019, 52, 90–105, doi:10.1016/j.inffus.2018.11.020.
25. Azim, A.; Aycard, O. Detection, classification and tracking of moving objects in a 3D environment. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium (IV), Alcala de Henares, Spain, 3–7 June 2012; pp. 802–807.
26. Jain, A.K.; Duin, R.P.; Mao, J. Statistical pattern recognition: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 4–37.
27. Eugster, P.T.; Felber, P.A.; Guerraoui, R.; Kermarrec, A.M. The many faces of publish/subscribe. ACM Comput. Surv. 2003, 35, 114–131.
28. Adam, M.S.; Anisi, M.H.; Ali, I. Object tracking sensor networks in smart cities: Taxonomy, architecture, applications, research challenges and future directions. Future Gener. Comput. Syst. 2017, doi:10.1016/j.future.2017.12.011.
29. Gaur, A.; Scotney, B.; Parr, G.; McClean, S. Smart city architecture and its applications based on IoT. Procedia Comput. Sci. 2015, 52, 1089–1094.
30. Iorga, M.; Goren, N.; Feldman, L.; Barton, R.; Martin, M.; Mahmoudi, C. Fog Computing Conceptual Model; Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2018.
31. Byers, C.C. Architectural imperatives for fog computing: Use cases, requirements, and architectural techniques for fog-enabled IoT networks. IEEE Commun. Mag. 2017, 55, 14–20.
32. Dautov, R.; Distefano, S.; Bruneo, D.; Longo, F.; Merlino, G.; Puliafito, A.; Buyya, R. Metropolitan intelligent surveillance systems for urban areas by harnessing IoT and edge computing paradigms. Softw. Pract. Exp. 2018, 48, 1475–1492.
33. Rincon, J.; Poza-Lujan, J.L.; Julian, V.; Posadas-Yagüe, J.L.; Carrascosa, C. Extending MAM5 meta-model and JaCalIVE framework to integrate smart devices from real environments. PLoS ONE 2016, 11, e0149665.
34. Simó-Ten, J.E.; Munera, E.; Poza-Lujan, J.L.; Posadas-Yagüe, J.L.; Blanes, F. CKMultipeer: Connecting Devices Without Caring about the Network. In Proceedings of the International Symposium on Distributed Computing and Artificial Intelligence, Porto, Portugal, 21–23 June 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 189–196.
35. Tijero, H.P.; Gutiérrez, J.J. Development of Real-Time and Mixed Criticality Distributed Systems through the DDS Standard. Rev. Iberoam. Autom. Inform. Ind. 2018, 15, 439–447, doi:10.4995/riai.2017.9000.
36. Poza, J.L.; Posadas, J.L.; Simó, J.E. From the queue to the quality of service policy: A middleware implementation. In Proceedings of the International Work-Conference on Artificial Neural Networks, Salamanca, Spain, 10–12 June 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 432–437.
37. Amurrio, A.; Azketa, E.; Gutiérrez, J.J.; Aldea, M.; Parra, J. A review on optimization techniques for the deployment and scheduling of distributed real-time systems. Rev. Iberoam. Autom. Inform. Ind. 2019, 16, 249–263.
38. Garage, W. Turtlebot. 2011. Available online: http://turtlebot.com (accessed on 30 October 2019).
39. Rogers, J.G., III; Nieto-Granda, C.; Christensen, H.I. Coordination strategies for multi-robot exploration and mapping. In Experimental Robotics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 231–243.
40. Coley, G. Beaglebone Black System Reference Manual; Texas Instruments: Dallas, TX, USA, 2013.
41. Chianese, A.; Piccialli, F.; Riccio, G. Designing a smart multisensor framework based on BeagleBone Black board. In Computer Science and its Applications; Springer: Berlin/Heidelberg, Germany, 2015; pp. 391–397.
42. Chen, L.; Wei, H.; Ferryman, J. A survey of human motion analysis using depth imagery. Pattern Recognit. Lett. 2013, 34, 1995–2006.
43. Tagami, Y.; Watanabe, M.; Yamaguchi, Y. Development Environment of 3D Graphics Systems. Fujitsu Sci. Tech. J. 2013, 49, 64–70.
44. Munera Sánchez, E.; Muñoz Alcobendas, M.; Blanes Noguera, J.; Benet Gilabert, G.; Simó Ten, J. A reliability-based particle filter for humanoid robot self-localization in RoboCup Standard Platform League. Sensors 2013, 13, 14954–14983.
45. Adams, R.; Bischof, L. Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 641–647.
46. Sánchez, E.M. Mission-Oriented Heterogeneous Robot Cooperation Based on Smart Resources Execution. Ph.D. Thesis, Universitat Politècnica de València, Valencia, Spain, 2017.
47. Chow, J.; Lichti, D.; Hol, J.; Bellusci, G.; Luinge, H. IMU and multiple RGB-D camera fusion for assisting indoor stop-and-go 3D terrestrial laser scanning. Robotics 2014, 3, 247–280.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).