
Processing Real-time Sensor Data Streams for 3D Web Visualization


Arne Bröring
Esri Suisse
Josefsstr. 218
Zürich, Switzerland
David Vial
Esri Suisse
Josefsstr. 218
Zürich, Switzerland
Thorsten Reitz
Esri R&D Center Zurich
Technoparkstr. 1
Zürich, Switzerland
Abstract

Today, myriads of sensors surround us. Their usage ranges from environmental monitoring (e.g., weather and air quality), over sensor-equipped smart buildings, to the quantified self and other human-observing applications. The data streams produced by such sensors often update with high frequencies, resulting in large data volumes. Being able to analyze those real-time sensor data streams requires efficient visualization techniques. In our work, we explore how 3D visualizations can be used to extend the available information space. More specifically, we present an approach for processing real-time sensor data streams to enable scalable Web-based 3D visualizations. Based on an event-driven architecture, our key contribution is the presentation of three processing patterns to optimize the transmission of sensor data streams to 3D Web clients.
1 Introduction

More and more sensors monitor our environment. Sensors are attached to smart 'things', ranging from refrigerators, over cars, to entire buildings. By leveraging standard Web technology, these sensors are highly accessible, forming the foundation for the Web of Things [13]. Examples of sensor applications are the Air Quality Sensor Web [3] of the European Environment Agency [15], sensor-based tracking of wildfires [20], the crowd-sourced collection of automobile sensor data [16], and human-observing applications, such as the self-monitoring smartphone app STRAVA for athletes. The sensor data streams generated
by sensors can have very high data rates and volumes. As
with other types of big data, visual analytics enable users
to acquire information and knowledge from them [17]. In
this work, we look at Web-based 3D visualizations for such
sensor data streams. Adding the third dimension extends
the available information space and allows more data dimensions to be displayed. Delivering such visualizations over the Web through a browser-based client improves accessibility.

However, Web-based 3D visualization of rapidly changing data is challenging. Fast 3D rendering requires significant
processing power and preprocessed, optimized data struc-
tures such as Binary Space Partitioning [6]. The rendering
is performed via the WebGL API of the browser, which is
limiting compared to desktop APIs. The efficient transmis-
sion of event data – large amounts of data are pushed by
sensors to Web Services and to the client – adds a third
challenge, which we address in this paper.
Most 3D engines, browser-based or otherwise, render each
frame using strict time budgets for steps such as prepro-
cessing the data, uploading data to the GPU, or performing
garbage collection of unneeded data. These time budgets
ensure fluent and reactive graphics. If data changes often,
such as when the client receives high frequency variable
data rate streams in a push communication style, the 3D
engine cannot adhere to the defined budgets.
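A minimal sketch of such a budgeted update loop (illustrative only; the function name and the 4 ms budget are assumptions, not part of any engine API):

```python
import time
from collections import deque

FRAME_BUDGET_S = 0.004  # assumed: ~4 ms per frame reserved for data updates

def apply_updates(queue: deque, budget_s: float = FRAME_BUDGET_S) -> int:
    """Apply queued sensor updates until the per-frame budget is spent.

    Whatever remains stays queued for the next frame, so high-rate
    streams defer work instead of stalling rendering."""
    applied = 0
    deadline = time.monotonic() + budget_s
    while queue and time.monotonic() < deadline:
        queue.popleft()  # in a real engine: upload this update to the GPU
        applied += 1
    return applied

pending = deque(range(1000))
done = apply_updates(pending)
# done + len(pending) == 1000: updates are deferred, never lost
```

The point of the sketch is that a push-style stream only fills the queue; the renderer drains it at its own pace.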
Fortunately, sensor data streams have properties that al-
low us to optimize content sent for rendering. The first
category of sensors update data for a particular feature
that is of interest [5], such as a temperature sensor on
a weather station. A second category of sensors creates
event features first, which might be updated later. In the
quantified self app STRAVA, the features of interest are
the tracks created by athletes, while in wildfire tracking
the features of interest are wildfires, which have dynamic
extents. Both types of sensors produce features that can
have a multitude of characteristics. Changes over time
on those features affect one or multiple characteristics,
while other characteristics, such as the geometry, remain constant.

Directly transmitting sensor data streams via push communication to a browser-based 3D visualization client results in performance and scalability challenges. Sensor
data streams can update and create very large numbers
of features, such as the tracks created by the over 32 million STRAVA users. Hence, the goal of this work is to develop
mechanisms for processing sensor data streams to enable
efficient data transmission to 3D visualization clients.
To achieve this goal, we have designed a generic event-
driven architecture that processes incoming sensor data
streams and forwards these as spatio-temporal features to
registered clients. We implement this generic architecture
design by utilizing ArcGIS Server and the contained GeoEvent Processor (GEP) (Section 2.1) as its centerpiece.
As an example for a widely used sensor data protocol for
the Internet of Things [11], we use the Message Queue
Telemetry Transport (MQTT) protocol (Section 2.1) to stream
sensor data to the GEP. The GEP receives, processes, and
transforms the MQTT data into the Indexed 3D Scene (i3s)
format (Section 2.2). The i3s data is then streamed via
WebSocket to the 3D visualization client. Based on this ar-
chitecture, we designed, implemented and evaluated three
generic processing patterns for the efficient communica-
tion of real-time sensor stream data to 3D clients:
1. Aggregation of data over a time interval before send-
ing it to the client.
2. Sending to the client only the latest representation of a feature received within a specified time frame.
3. Sending only the differences in changed characteris-
tics of features to the client.
The remainder of this paper is structured as follows. Sec-
tion 2 describes the background and related research of
this work. Section 3 first presents the architecture design
as a basis for the three processing patterns described in
the second part. Section 4 evaluates the developed pat-
terns based on example data sets. Section 5 concludes
this work and provides an outlook to future research.
2 Background

This section describes the context of this research. First,
the area of Web-based event processing is introduced with
a focus on existing systems and standards (Section 2.1).
Second, requirements for 3D Web visualization are listed
and the new Indexed 3D Scene format realizing those re-
quirements is introduced (Section 2.2).
2.1 Web-based Data Stream Processing
Systems and data exchange formats that allow efficient processing of real-time sensor data streams have been researched and developed for many years. The Message
Queue Telemetry Transport (MQTT) [2] standard is an ex-
ample for a light-weight messaging protocol that is specif-
ically designed for limited processing power and network
bandwidth. Today, MQTT is widely used in Internet of Things
applications. Through an MQTT broker, message produc-
ers can publish messages to specified topics where they
are received by subscribed consumers. This publish/sub-
scribe communication pattern is realized on top of the TCP/IP
stack. Hence, MQTT is lower-level and not integrated with
Web standards.
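The topic-based routing an MQTT broker performs can be sketched, independent of the actual MQTT wire protocol, as a minimal in-process publish/subscribe broker (all names here are illustrative):

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class Broker:
    """Minimal in-process publish/subscribe broker, illustrating the
    topic-based routing that an MQTT broker performs over TCP/IP."""

    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        """Register a consumer callback for a topic."""
        self._subs[topic].append(callback)

    def publish(self, topic: str, payload: str) -> None:
        """Deliver a producer's message to all consumers of the topic."""
        for cb in self._subs[topic]:
            cb(payload)

broker = Broker()
received = []
broker.subscribe("sensors/temperature", received.append)
broker.publish("sensors/temperature", "f1; 8:03:11; 22.34")
# received now holds the message; publishers and subscribers are decoupled
```

In the real protocol, broker and clients are separate processes connected over TCP, but the decoupling of producers and consumers via topics is the same.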
In contrast to MQTT, the Sensor Web framework developed
by the Open Geospatial Consortium (OGC) defines the Sen-
sor Event Service (SES) [7] for processing of sensor data
streams based on Web standards. The SES specification
reuses standards to handle event notifications (WS-Base-
Notification, WS-BrokeredNotification) [12, 4] as well as to
operate event channels (WS-Topics) [21]. SES clients can subscribe for certain information and are then automatically notified about matching events. The SES supports ad-
vanced event filtering and realizes Complex Event Process-
ing [18]. Syntactical, logical, spatial, temporal, arithmetic,
and comparison filters can be defined using the Filter En-
coding standard [14]. Combinations of event streams can
be specified using the Event Pattern Markup Language
(EML) [9]. For example, these filters allow a client to receive fire warning notifications in case the average temperature over the past 2 minutes rises above 60° Celsius and smoke is detected.

The implementation of our work is based on the GeoEvent
Processor (GEP) [19]. While the GEP does not follow the above-described OGC standards out of the box, it provides similar concepts and functionality. Data stream inputs
and outputs can be defined and combined in so-called geo-
event services. A geoevent service is created by the GEP
administrator and connects multiple inputs with multiple
outputs. In-between, data filtering and processing steps
may be defined. Incoming data messages are translated
to so-called geoevents, whose structure is defined by the
administrator. New in- and outputs can be developed by
following an open API to extend the functionality of the
GEP. This way, the various Web standards and best prac-
tices (e.g., JSON, REST, SOAP, or Email) can be utilized
to implement complex event processing applications. For
example, messages from an RSS data stream can be com-
bined with micro-blog messages from Twitter, they can be
filtered for certain keywords, and can finally be streamed
to a WebSocket [10] or stored in another Web service.
The GEP is provided as an extension to the ArcGIS Server
[1], a functionality-rich server for building Web-based Ge-
ographic Information Systems (GIS).
2.2 3D Web Visualization with Indexed 3D Scenes

The Indexed 3D Scene (i3s) is a visualization format for
3D geospatial data that can be streamed via a REST API
and consumed by Desktop, Web and Mobile clients. It is
currently in development, with 10 encoders and 3 clients
being implemented, and has been designed to fulfill this
set of requirements:
1. Scalability: Support very large scenes, with global
extent and a very large number of features (up to 1
billion), as well as very heavy features
2. Reusability: Be useable both as the delivery format
of the ArcGIS Scene Service and as a format stored
in a local file or database
3. Level of Detail: Support Level of Detail concepts
for generalization of very large/heavy features and
for “semantic” Level of Detail approaches
4. User-controllable symbology: Support client-side
symbology rendering
5. Extensibility: Be extensible to support new features
(e.g. geometry types) and new platforms (e.g. by
allowing definition of different materials/shaders)
6. Web Friendliness: Easy to handle and parse by Web
clients by using JSON and current Web standards
7. Declarative: Limit how much specific knowledge on
the client-side is needed for format support (e.g. In-
dex generation method only needs to be known while
writing the format)
8. Follow REST/JSON API best practices: “Hyper-
text as the Engine of Application State" – make all resources navigable using hrefs from relevant other resources

i3s is a partitioned 3D scene format with some similarities to regionated KML. In an indexed 3D scene, the spatial
extent is split into regions with a roughly equal amount
of data in them, and an access data structure – the actual
index – allows the client to quickly discover which data it
actually needs. Such a region of a 3D scene is called a
node. Each node has different components – node index
documents (NIDs), feature data, textures, geometry and
resources shared across features of a given node. The i3s
format also has an overlay mode where only differences to
a static base data set need to be transmitted.
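The indexing idea can be illustrated with a small sketch. Note this is an assumption-laden simplification: a real i3s index adapts node boundaries so regions hold roughly equal amounts of data, whereas this sketch uses a fixed grid purely to show how a client discovers only the regions it needs:

```python
from collections import defaultdict

CELL = 10.0  # illustrative region size; real i3s nodes adapt to data density

def cell_of(x: float, y: float) -> tuple:
    """Map a coordinate to its grid region ('node')."""
    return (int(x // CELL), int(y // CELL))

def build_index(features):
    """Group (id, x, y) features into spatial regions so a client can
    fetch only the regions intersecting its current view."""
    index = defaultdict(list)
    for fid, x, y in features:
        index[cell_of(x, y)].append(fid)
    return index

def query(index, xmin, ymin, xmax, ymax):
    """Return the feature IDs of all regions touched by a view extent."""
    hits = []
    for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
        for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
            hits.extend(index.get((cx, cy), []))
    return hits

idx = build_index([("f1", 1, 4), ("f2", 9, 6), ("f3", 25, 3)])
print(query(idx, 0, 0, 10, 10))  # ['f1', 'f2'] -- f3 lies outside the view
```

The access structure lets the client skip whole regions without downloading their feature data, which is what makes very large scenes tractable.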
3 Processing Sensor Data Streams for 3D Web Visualization

This section first presents the design of an event-driven
architecture for processing real-time sensor data streams
to 3D Web clients (Section 3.1). We then present three
generic processing patterns supported by this architecture
to optimize the data transmission efficiency for different
use cases of sensor data streams and 3D Web visualization
(Section 3.2).
3.1 An Architecture for Processing Sensor Data
Streams for 3D Web Visualization
Figure 1 shows the abstract architecture design pattern
we have developed. In the center of the architecture is a
Geospatial Data Server which offers two Web service in-
terfaces: (1) a 3D Scene Download Service and (2) a Fea-
ture Push Service. The 3D visualization client consumes
3D feature data from the Geospatial Data Server in two
ways. First, it pulls static feature data from the 3D Scene
Download Service for an initial rendering of a 3D scene.
Second, the client registers at the Feature Push Service
through which feature representations, generated from in-
coming sensor data, are forwarded to the client. In order
to receive sensor data, the Geospatial Data Server regis-
ters at a Sensor Data Broker. Sensors publish their data to
a broker which forwards the data on-the-fly to the server.
Figure 1: Abstract design of the real-time sensor data processing architecture for Web-based 3D visualization.

Figure 2 shows our realization of the abstract architec-
ture pattern described above. The role of the Geospatial
Data Server is filled by ArcGIS Server [1], which offers different service interfaces; here, the 3D Scene Service
and the GeoEvent Processor are utilized to implement the
3D Scene Download Service and the Feature Push Service.
The Web Scene Viewer, as an example of a 3D client, can
connect to these two services and consume their data. We
implement the role of the Sensor Data Broker using an
MQTT broker, which is in our case the IBM Intelligent Op-
erations Center. To enable the registration of the ArcGIS
Server for certain sensor data, the MQTT broker offers dif-
ferent topics. During the application planning, a common
topic is specified between sensor and server. This way, the
sensor data is forwarded to the GeoEvent Processor once the sensor data stream reaches the MQTT broker.
Within the GeoEvent Processor, the sensor data is now
processed, transformed to the i3s format, and finally for-
warded to the WebSocket where the client is listening for
data. In order to receive MQTT data, to apply the different
processing steps, and to transform the data to i3s, we have
developed separate components for the GeoEvent Proces-
sor. Figure 2 shows details of this work.
To enable the registration at MQTT brokers, we developed
an MQTT Inbound Transport which is able to listen for
an MQTT topic and receive incoming data. Together with
the default GEP Text Adapter, it forms the MQTT-Input. A graphical configuration of the MQTT-Input, as shown in Figure 3, allows defining the details of the MQTT end-
point, such as the host address and port. Also, the struc-
ture of the incoming MQTT data is specified here. In the
example setup of Figure 3, the expected MQTT message payload follows a CSV structure with ';' as attribute separator and line break as message separator.

Figure 2: Realization of the architecture (within the GeoEvent Processor, blue components are newly developed and green components are provided by the GEP).

The num-
ber and type of attribute elements within the MQTT pay-
load are defined in the MQTT-Event, which is referenced
in this MQTT-Input configuration. Listing 1 shows four ex-
ample MQTT messages as part of a sensor data stream. In
this example, the messages contain four fields: feature ID,
time, attribute value, and geometry. The geometry is en-
coded in the JSON format as defined in the ArcGIS REST
API [8]. The geometry can also be more complex (e.g., a polygon), and the number of contained attributes is not limited.
Once an MQTT message is received by the MQTT-Input,
it transforms the message to an instantiation of an MQTT-
Event. Through the setup of a GeoEvent service, as shown
in Figure 3, the MQTT-Input is connected with an i3s-Output
on the right side of Figure 2. The GeoEvent service en-
sures that instantiated MQTT-Events are handed over to
the i3s-Output.
Listing 1: Example stream of sensor messages using
CSV structuring as MQTT payload.
f1; 8:03:11; 22.34; {"paths":[[1,4,0.3],[2,2,0.2]]}
f2; 8:03:12; 10.63; {"paths":[[9,6,0.1],[8,5,0.1]]}
f1; 8:04:02; 22.34; {"paths":[[2,5,0.5],[3,3,0.4]]}
f2; 8:04:04; 11.89; {"paths":[[8,5,0.1],[8,4,0.2]]}
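Parsing one such payload row can be sketched as follows. This is a simplified illustration only; in the actual system the GEP Text Adapter is configured rather than hand-coded, and the field names below are taken from the example:

```python
import json

def parse_message(payload: str) -> dict:
    """Parse one ';'-separated MQTT payload row into a message dict.

    Expected fields (as in Listing 1): feature ID, timestamp, attribute
    value, and a JSON-encoded geometry (ArcGIS REST API style)."""
    feature_id, time, value, geometry = [p.strip() for p in payload.split(";", 3)]
    return {
        "feature_id": feature_id,
        "time": time,
        "value": float(value),
        "geometry": json.loads(geometry),
    }

msg = parse_message('f1; 8:03:11; 22.34; {"paths":[[1,4,0.3],[2,2,0.2]]}')
print(msg["feature_id"], msg["value"])  # f1 22.34
```

Splitting with a maximum of three separators keeps the geometry intact even though its JSON encoding could itself contain semicolons.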
Listing 2: Example i3s feature encoding.
 ]},
 "type":"lines"
 },
 "type":"Embedded"
 }],
 "layer":"layer",
 "mbb":[
  1,4,0.3,
  2,2,0.2
 ],
 "attributes":[
  {"metadata":[{
   "name":"feature_id",
   "value":"f1"
  }]},{
   "name":"time",
   "value":"8:03:11"
  },{
   "name":"intensity",
   "value":"22.34"
  }]
}]}
The i3s-Output consists of the newly developed i3s Out-
bound Adapter in combination with the GEP WebSocket
Outbound Transport. This component is able to transform
MQTT-Events into i3s feature data (Section 2.2). Listing 2
shows the translation of the first MQTT message displayed
in Listing 1 to an i3s feature.
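The translation can be sketched as below. This is a simplified illustration, not the actual i3s encoder: the mbb is computed here as per-axis minima and maxima, and the attribute layout only loosely follows the excerpt in Listing 2:

```python
def to_i3s_feature(msg: dict) -> dict:
    """Build a simplified i3s-style feature dict from a parsed sensor
    message (loosely modeled on the fields visible in Listing 2)."""
    pts = msg["geometry"]["paths"]  # list of [x, y, z] vertices
    mins = [min(p[i] for p in pts) for i in range(3)]
    maxs = [max(p[i] for p in pts) for i in range(3)]
    return {
        "layer": "layer",
        "mbb": mins + maxs,  # bounding box: per-axis minima, then maxima
        "attributes": [
            {"name": "feature_id", "value": msg["feature_id"]},
            {"name": "time", "value": msg["time"]},
            {"name": "intensity", "value": str(msg["value"])},
        ],
    }

msg = {"feature_id": "f1", "time": "8:03:11", "value": 22.34,
       "geometry": {"paths": [[1, 4, 0.3], [2, 2, 0.2]]}}
feature = to_i3s_feature(msg)
print(feature["mbb"])  # [1, 2, 0.2, 2, 4, 0.3]
```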
Beyond the translation into the i3s format, the configuration of the i3s-Output allows applying the three developed patterns for optimizing the communication efficiency. As
shown in the screenshot of the GeoEvent Processor man-
ager (Figure 3), these three patterns can be applied by
selecting the following options: (1) ’update interval’, (2)
’send latest only’, and (3) ’send diff only’. These options relate to the three patterns, which are described in detail in Section 3.2.
3.2 Patterns for Optimizing the Communica-
tion Efficiency
The three generic patterns for sensor data stream process-
ing discussed here are designed for specific use cases of
sensor data streams. They aim at making the transmission
of such data more efficient for 3D visualization.
Without applying any processing, the incoming real-time
sensor data streams are directly transmitted by the GEP to
the client (Figure 4). No further processing is performed,
except the translation from MQTT messages to the i3s data
format as the data passes through the developed GEP com-
ponents (Figure 2). Each incoming MQTT data message is
translated into an i3s feature. The client receives those i3s features irregularly. This can cause inefficient data transmission, for example, when the values of a large number of subsequent MQTT messages remain mostly constant.

Figure 3: Screenshot of the GeoEvent Processor manager to configure the GeoEvent Service.

Figure 4: Direct data transmission.
3.2.1 Aggregation over Time Interval
In the first pattern, the server aggregates the received
data messages over a specified time interval before send-
ing them to the client. Therefore, the server stores, during
the defined update interval, all incoming messages as sep-
arate feature representations. After the defined time in-
terval, the server transmits all feature representations as
a single aggregate to the client (see Figure 5).
This pattern is particularly efficient in use cases where the
features of interest are experiencing changes on multiple
characteristics. The client does not want to receive those
changes directly when they occur, since this might disturb
the client-side computing processes. Instead, a defined
maximum update interval (e.g., every 5 seconds) is desired
in which collected features are sent as part of an aggregate.

To realize this pattern on the server side, the computation
time is O(1), thus, the required computing resources are
low. The algorithm runs in O(n) space, with n being the
number of incoming messages per defined time interval.
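A server-side sketch of this pattern, under the assumption that flushing is driven by a timer set to the update interval (the real implementation is a GEP output configuration, not this code):

```python
class Aggregator:
    """Pattern 1: buffer incoming feature representations and emit them
    as one aggregate per update interval."""

    def __init__(self) -> None:
        self._buffer: list = []

    def receive(self, feature: dict) -> None:
        """O(1) per message: just append to the interval buffer."""
        self._buffer.append(feature)

    def flush(self) -> list:
        """Called once per update interval (e.g. every 5 s): return all
        buffered features as a single aggregate and reset the buffer."""
        aggregate, self._buffer = self._buffer, []
        return aggregate

agg = Aggregator()
agg.receive({"feature_id": "f1", "value": 22.34})
agg.receive({"feature_id": "f2", "value": 10.63})
push = agg.flush()  # one WebSocket push instead of two
print(len(push))    # 2
```

Space usage is O(n) in the number of messages per interval, matching the analysis above.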
Figure 5: Pattern: Aggregation over Time Interval
3.2.2 Sending Latest Feature Only
In the second pattern, the server sends only the latest representation of each feature that was received within a specified time frame. Therefore, the
GeoEvent processor maintains a temporary feature store.
During the defined time frame, the last representation of a
feature that comes in as a data message is stored. In case
a new version of this feature is received, the latest representation is updated. After the defined time interval, the
server transmits the latest representation of a feature to
the client (see Figure 6) and the server erases the tempo-
rary feature store.
This pattern is particularly efficient when very high num-
bers of messages are coming in to the server. The client
does not want to receive and react to all those features.
Instead, the client is only interested in the latest representation of a feature for a defined time period (e.g., the client needs the latest feature representation every 5 sec-
onds). This pattern can be combined with the aggregation
pattern (Section 3.2.1), so that all latest feature represen-
tations are sent to the client in one aggregate.
To realize this pattern on the server-side, the computation
time is O(1), thus, the required computing resources are
low. The algorithm runs in O(n) space, with n being the
number of latest feature representations received as in-
coming messages during the defined time frame. However,
O(n) is only needed during the defined time interval, and
afterwards the memory is erased.
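A sketch of the temporary feature store (illustrative code, not the GEP implementation): keyed overwrites make the per-message cost O(1), and flushing at the end of the time frame erases the store, as described above.

```python
class LatestOnly:
    """Pattern 2: keep only the most recent representation per feature;
    one flush per time frame sends these and clears the store."""

    def __init__(self) -> None:
        self._latest: dict = {}

    def receive(self, feature: dict) -> None:
        """O(1): overwrite any earlier representation with the same ID."""
        self._latest[feature["feature_id"]] = feature

    def flush(self) -> list:
        """End of time frame: emit latest versions, erase the store."""
        features, self._latest = list(self._latest.values()), {}
        return features

store = LatestOnly()
store.receive({"feature_id": "f1", "value": 22.34})
store.receive({"feature_id": "f1", "value": 23.10})  # supersedes the first
store.receive({"feature_id": "f2", "value": 10.63})
push = store.flush()
print(len(push))  # 2: only the latest version of each feature is sent
```

Combining this with Pattern 1 amounts to flushing the store as one aggregate push.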
Figure 6: Pattern: Sending Latest Feature Only
3.2.3 Sending Feature Differences Only
In the third pattern, the server sends only differences in
characteristics (i.e. attributes or geometries) of features to
the client. For example, feature f with attribute values A,
B, C is received at t1 by the server and directly forwarded
to the client as: f(A,B,C). At t2, a message describing fea-
ture f with attribute values A’, B’, and C is received. Apply-
ing this pattern, the feature f is forwarded to the client as
f(A’,B’), with attribute C not being part of the feature rep-
resentation, as it has not been changed. To implement this
pattern, the server stores an up-to-date version of each fea-
ture containing values for all its attributes and geometries.
In case a new message regarding this feature is coming in,
the server determines the differences to the stored feature
version. Then, the server builds a feature representation
that contains only the changed attribute and geometry val-
ues and transmits that version of the feature to the client.
Sending only parts of a feature can be done by taking ad-
vantage of the flexibility of the i3s format, which allows omitting fields when the default value can be used or when no changes occurred on these fields.
This pattern is particularly efficient when the features of
interest are experiencing a lot of changes on single char-
acteristics (e.g.: change in geometry or change in a sin-
gle attribute), while the other characteristics remain con-
stant. In such cases, it is more efficient to transmit only
the change of a characteristic to the client instead of the
entire feature representation, especially, when the feature
representation is large and consists of several and/or com-
plex attributes.
To realize this pattern on the server-side, the computation
time is O(n), with n being the number of characteristics.
The algorithm runs in O(n) space, with n being the over-
all number of incoming features. This amount of required
memory space can be critically high in scenarios with very
large numbers of features of interest. This disadvantage can be avoided by combining this pattern with the "sending latest feature only" pattern, as the required space is
then erased after the specified update interval (see Sec-
tion 3.2.2) and O(n) space is only needed for the time of
the update interval.
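A sketch of the diff computation (illustrative only; in the actual i3s output, unchanged fields are simply omitted from the transmitted feature representation):

```python
class DiffOnly:
    """Pattern 3: store the current version of every feature and forward
    only changed characteristics, plus the feature ID."""

    def __init__(self) -> None:
        self._current: dict = {}

    def receive(self, feature: dict) -> dict:
        """Compare against the stored version (O(n) in the number of
        characteristics) and return only the changed fields."""
        fid = feature["feature_id"]
        previous = self._current.get(fid, {})
        diff = {k: v for k, v in feature.items()
                if k == "feature_id" or previous.get(k) != v}
        self._current[fid] = feature  # keep the full up-to-date version
        return diff

server = DiffOnly()
first = server.receive({"feature_id": "f", "A": 1, "B": 2, "C": 3})
second = server.receive({"feature_id": "f", "A": 9, "B": 7, "C": 3})
print(second)  # {'feature_id': 'f', 'A': 9, 'B': 7} -- C unchanged, omitted
```

The first message for a feature is forwarded in full, exactly as in the f(A,B,C) example above; later messages shrink to their changed characteristics.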
Figure 7: Pattern: Sending Feature Differences Only

4 Evaluation

To evaluate our approach, we developed a testing client (Figure 8), which allows us to send CSV-structured MQTT messages to the broker (see left panel). The MQTT bro-
ker forwards the messages to the GEP, which pushes the
resulting i3s features back to the client. The returned fea-
tures are displayed on the right panel. The client allows defining the time (in milliseconds) between the sending of two CSV rows as separate MQTT messages. This way, the
client builds the basis for creating different scenarios to
evaluate the three patterns.
In addition, we created a Java program that is able to auto-
matically generate random CSV structured data sets for a
specified number of features and a number of attribute up-
date cycles. For each feature and per update cycle, a CSV
row is created with 1 column for the feature ID, 9 attribute
columns, and 1 geometry column. Further the program al-
lows to specify a likelihood with which each attribute as
well as the geometry of a feature changes in an update cy-
cle. For our evaluation, we created four data sets, each for 10 features and 100 update cycles, with 20%, 40%, 60%, and 80% likelihood of change in the feature characteristics.

For the first evaluations, we use the data set with 20% change likelihood. Stored as a CSV file, this data set has a size of 275,108 bytes. Sending this data set through the
test client to the MQTT broker and receiving it from the
GEP in the i3s data format without applying any of the developed patterns results in 1,282,878 bytes of data, which are transferred to the client via WebSocket in 1,000 message pushes (1 push per row; 10 features x 100 update cycles).

As a first evaluation, we applied the ’Aggregation over
Time’ pattern (Section 3.2.1) with different update inter-
vals used by the GEP (see Figure 3). Table 1 shows the
results of this evaluation done for update intervals of 1
and 2 seconds. It shows that the number of data pushes
is significantly reduced by this pattern compared to the 1,000 pushes without the pattern. This behaviour of regu-
lar pushes of aggregated data is beneficial for a client ap-
plication, as the receiving and processing of the data can
be more efficiently handled in the available time budgets
of the rendering engine. However, this pattern increases
the size of transmitted data by around 10%. This is due to
the additional textual statements in the i3s output required
to express the aggregation of the data.
The four data sets can be downloaded here: http://

Figure 8: Basic test client (left: sending messages to MQTT server; right: receiving i3s features from GEP)

Update Interval  # Pushes  Size
1 s              48        1,411,955 bytes
2 s              24        1,408,901 bytes
Table 1: Evaluation of ’Aggregation over Time’.

Table 2 shows the evaluation of the ’Sending Latest Feature Only’ pattern (Section 3.2.2). We ran this evaluation
with the data set for 10 features and 100 update cycles
with 20% change likelihood. For each evaluation run, we
used a different update interval on the server-side: 1, 2, 3,
4, and 5 seconds. Between this update interval, the server
stores the latest version of each incoming feature and this
latest version is then forwarded to the client at the end of
the update interval. The result of this pattern is that both,
the number of data pushes as well as the size of the sent
data, are significantly reduced. In case of the 1-second update interval, 11 pushes are used, only about 1% of the number of pushes in the direct transmission, and the amount of sent data is also significantly lower, at only 12% of the size compared to applying no pattern.
Update Interval # Pushes Size
1 s  11  157,234 bytes
2 s   6   85,926 bytes
3 s   4   57,289 bytes
4 s   4   43,001 bytes
5 s   3   42,985 bytes
Table 2: Evaluation of ’Sending Latest Feature Only’.
Table 3 shows the evaluation of the ’Sending Feature Dif-
ferences Only’ pattern. In the evaluation of this pattern,
we used the 4 different data sets with different change
likelihoods: 20%, 40%, 60%, and 80%. Implementing this
pattern, the server maintains a most recent representation
of each feature and only sends changed feature character-
istics to the client. While the number of pushes remains at around 1,000, this pattern significantly reduces the size of transmitted data. How much the size reduces depends on the likelihood of feature changes, as shown in the evaluation.
Change Likelihood Size
20 %  310,661 bytes
40 %  541,135 bytes
60 %  773,601 bytes
80 %  975,736 bytes
Table 3: Evaluation of ’Sending Feature Differences Only’.
5 Conclusions and Outlook

The key contribution of this paper is the presentation and
evaluation of three generic processing patterns that can
be applied in handling high frequency sensor data streams
for 3D Web visualization. As the basis for those patterns,
we developed a reference architecture consisting of sen-
sor, sensor data broker, and geospatial data server. The
geospatial data server can be configured to apply the three
processing patterns: (1) the aggregation of stream data
over a specified time interval, (2) the sending of only the
latest representation of features over a defined time pe-
riod, and (3) the sending of only the differences in feature
characteristics. We implemented this generic architecture
and the processing patterns based on the GeoEvent Pro-
cessor and ArcGIS Server.
As our evaluation has shown, applying these patterns opti-
mizes the communication efficiency of sensor data streams
to 3D Web visualization clients. The highest reduction in
number of needed data pushes and size in transmitted data
could be achieved by the second pattern, the ’sending of
latest feature only’. This pattern is particularly helpful in
cases where the client needs to display the features in de-
fined update cycles and the changes happening in-between
of this time frame are not of interest for the client. The
third pattern, the ’sending feature diff only’, also achieves
high reductions in transmitted data size, while the number
of data pushes remains on the same level. This pattern
is particularly useful in scenarios where the client needs
to receive all feature changes. In addition, the presented
patterns can be combined, to achieve benefits from multi-
ple patterns. Continuing this work, we will evaluate such
effects in detail.
In the future, the results on communication efficiency achieved here can be further improved by combining the processing patterns with server-side filtering based on client subscriptions. For example, a client could subscribe using a com-
bination of syntactic, logical, spatial, temporal, arithmetic,
and comparison filters as described in the SES and EML
specifications (Section 2.1).
Acknowledgements

This work is supported by the Environmental Systems Research Institute (Esri) and the Climate KIC funded project
References

[1] E. Bader. ArcGIS Server Administrator and Developer Guide. Esri, San Diego, CA, USA, 2005.
[2] A. Banks and R. Gupta. MQTT Version 3.1.1 –
Candidate OASIS Standard 01. OASIS, 2014. Online
mqtt/v3.1.1/mqtt-v3.1.1.html; last accessed
[3] A. Bröring, J. Echterhoff, S. Jirka, I. Simonis,
T. Everding, C. Stasch, S. Liang, and R. Lemmens.
New Generation Sensor Web Enablement. Sensors,
11(3):2652–2699, 2011.
[4] D. Chappell and L. Liu. Web Services Brokered
Notification 1.3 (WS-BrokeredNotification). OASIS, 2006.
[5] S. Cox. OGC Abstract Specification Topic 20:
Geographic Information - Observations and
Measurements. Open Geospatial Consortium,
Wayland, MA, USA, 2010.
[6] M. de Berg, O. Cheong, M. van Kreveld, and
M. Overmars. Binary Space Partitions.
Computational Geometry: Algorithms and
Applications, pages 259–281, 2008.
[7] J. Echterhoff and T. Everding. OGC Discussion Paper
08-133: OpenGIS Sensor Event Service Interface
Specification. Open Geospatial Consortium,
Wayland, MA, USA, 2008.
[8] Esri. GeoServices REST Specification, Version 1.0. Esri White Paper, Redlands, CA, USA, 2010.
[9] T. Everding and J. Echterhoff. OGC Discussion Paper
08-132 - Event Pattern Markup Language (EML).
Open Geospatial Consortium, Wayland, MA, USA.
[10] I. Fette and A. Melnikov. IETF RFC 6455: The
WebSocket Protocol. IETF, Network Working Group,
2011.
[11] N. Gershenfeld, R. Krikorian, and D. Cohen. The
Internet of Things. Scientific American,
291(4):76–81, 2004.
[12] S. Graham, D. Hull, and B. Murray. Web Services
Base Notification 1.3 (WS-BaseNotification). OASIS.
[13] D. Guinard, V. Trifa, and E. Wilde. A Resource
Oriented Architecture for the Web of Things. In
Internet of Things (IOT), 2010, pages 1–8. IEEE.
[14] ISO/TC211. ISO/DIS 19143: Geographic information
– Filter Encoding. ISO/TC 211, Geneva, Switzerland.
[15] S. Jirka, A. Bröring, P. Kjeld, J. Maidens, and
A. Wytzisk. A Lightweight Approach for the Sensor
Observation Service to Share Environmental Data
Across Europe. Transactions in GIS, 16(3):293–312.
[16] S. Jirka, A. Remke, and A. Bröring. enviroCar –
Crowd Sourced Traffic and Environment Data for
Sustainable Mobility. In Environmental Information
Systems and Services - Infrastructures and
Platforms 2013 - with Citizens Observatories, Linked
Open Data and SEIS/SDI Best Practices (ENVIP
2013), Neusiedl am See, Austria, 2013.
[17] D. A. Keim, F. Mansmann, J. Schneidewind, and
H. Ziegler. Challenges in Visual Data Analysis. In
10th International Conference on Information
Visualization (IV 2006), pages 9–16. IEEE, 2006.
[18] D. Luckham. The Power of Events. Addison-Wesley.
[19] A. Mollenkopf. ArcGIS GeoEvent Processor – An
Introduction. In Esri International User Conference
2013, San Diego, CA, USA, July 2013.
[20] D. Moodley, A. Terhorst, I. Simonis, G. McFerren,
and F. van den Bergh. Using the Sensor Web to
Detect and Monitor the Spread of Wild Fires. In 2nd
International Symposium on Geo-Information for
Disaster Management, Goa, India, September 2006.
[21] W. Vambenepe, S. Graham, and P. Niblett. Web
Services Topics 1.3 (WS-Topics). OASIS, 2006.