Article

Event-Based Data Dissemination on Inter-Administrative Domains: Is It Viable?


Abstract

Middleware for timely and reliable data dissemination is a fundamental building block of the Event Driven Architecture (EDA), which is the ideal platform for developing a large class of sense-and-react applications such as air traffic control and defense systems. Many of these middleware platforms comply with the Data Distribution Service (DDS) specification and have traditionally been designed for deployment in strictly controlled and managed environments, where they show predictable behavior and performance. However, the enterprise setting is far from managed: it can be characterized by geographic, inter-domain scale and heterogeneous resources. In this paper we present a study aimed at assessing the strengths and weaknesses of a commercial DDS implementation deployed in an unmanaged setting. Our experimental campaign shows that, if the application manages a small number of homogeneous resources, this middleware can perform in a timely and reliable manner as long as there is no event fragmentation at the network transport level. In a more general setting with fragmentation and heterogeneous resources, reliability and timeliness degrade rapidly, pointing to a clear need for research in the field of self-configuring, scalable event dissemination with QoS guarantees in unmanaged settings.
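As a rough illustration of the fragmentation threshold the abstract refers to (the class name, constants, and sizes below are illustrative assumptions, not taken from the paper), a serialized event sent over UDP avoids IP-level fragmentation only while it fits, together with the protocol headers, in a single MTU-sized datagram:

    // Illustrative sketch: estimates whether a serialized event sent over UDP
    // would be fragmented at the IP level, the condition the study identifies
    // as the point where timeliness and reliability start to degrade.
    // The 1500-byte MTU and header sizes are typical Ethernet/IPv4 values.
    public final class FragmentationCheck {

        private static final int MTU_BYTES = 1500;   // typical Ethernet MTU (assumption)
        private static final int IP_HEADER = 20;     // IPv4 header without options
        private static final int UDP_HEADER = 8;     // UDP header

        /** Payload bytes that fit into a single, unfragmented UDP datagram. */
        static int unfragmentedPayloadLimit() {
            return MTU_BYTES - IP_HEADER - UDP_HEADER; // 1472 bytes
        }

        /** Rough number of IP fragments a UDP datagram of the given size produces. */
        static int estimateFragments(int serializedEventBytes) {
            int datagram = serializedEventBytes + UDP_HEADER;
            int perFragmentPayload = MTU_BYTES - IP_HEADER;
            return (int) Math.ceil((double) datagram / perFragmentPayload);
        }

        public static void main(String[] args) {
            for (int size : new int[] {512, 1472, 4096, 65000}) {
                System.out.printf("event of %d bytes -> %d IP fragment(s)%n",
                        size, estimateFragments(size));
            }
        }
    }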


... The QoS policies enable multilevel configuration and allow DDS to outperform other types of middleware [16]. Many studies [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32] have shown how QoS policies can be used to enhance the performance of DDS-based systems in terms of various performance criteria, such as reliability, throughput, latency, and jitter. However, only a few studies have pointed out the limitations of QoS policy adjustment [21,26]. ...
... In the literature, there are two types of approaches for tuning the performance of a DDS-based system. The first type aims to tune the QoS policies based on guidelines provided by human experts [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32]. The second type aims to tune the sending rates of the publishers automatically under a unicast communication architecture [36]. ...
... Many studies [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32] have applied the QoS policies compliant with the standard specification [1] and shown that some configurations of QoS policies can improve the performance of a DDS-based system, for example applying the RELIABILITY and HISTORY QoS to optimize reliability [23,25]. Those existing studies have demonstrated that QoS policy adjustment can improve the performance of DDS-based systems across a wide variety of performance criteria. ...
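The snippet below is a minimal, self-contained sketch of what a writer-side configuration combining the RELIABILITY and HISTORY policies discussed above could look like conceptually; the types and names are hypothetical and do not correspond to the API of any particular DDS vendor.

    // Hypothetical model of the two QoS policies discussed above; real DDS
    // implementations expose equivalent settings through their own APIs.
    enum ReliabilityKind { BEST_EFFORT, RELIABLE }
    enum HistoryKind { KEEP_LAST, KEEP_ALL }

    record WriterQos(ReliabilityKind reliability,
                     HistoryKind history,
                     int historyDepth) {

        /** A configuration biased towards delivery guarantees: missed samples are
         *  retransmitted and the last N samples are retained for late repairs. */
        static WriterQos reliableKeepLast(int depth) {
            return new WriterQos(ReliabilityKind.RELIABLE, HistoryKind.KEEP_LAST, depth);
        }

        /** A configuration biased towards throughput: no retransmission overhead. */
        static WriterQos bestEffort() {
            return new WriterQos(ReliabilityKind.BEST_EFFORT, HistoryKind.KEEP_LAST, 1);
        }
    }

    class QosDemo {
        public static void main(String[] args) {
            System.out.println(WriterQos.reliableKeepLast(32));
            System.out.println(WriterQos.bestEffort());
        }
    }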
Article
Full-text available
Data distribution service (DDS) is a communication middleware that has been widely used in various mission-critical systems. DDS supports a set of attributes and quality of service (QoS) policies that can be tuned to guarantee important performance factors of message delivery (communication) in mission-critical systems, such as reliability and throughput. However, optimizing reliability and throughput simultaneously in a DDS-based system is challenging. Adjusting the publisher's sending rate is a direct approach to controlling the performance of a DDS-based system, but to the best of our knowledge, only a few studies have examined this approach. In this study, we propose a novel algorithm that adjusts the sending rate of each publisher to optimize the message delivery reliability and throughput of a DDS-based system. We also develop a DDS-based system model and use the model to define topic-based reliability and throughput. According to our experimental results, the proposed algorithm achieves a system communication reliability of 99–99.99% in three scenarios with different reliability issues (70–99.99% reliability). Most importantly, the proposed algorithm can slightly increase per-topic throughput while improving per-topic reliability.
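The abstract above does not give the algorithm's details. As a purely illustrative sketch of the general idea of adjusting a publisher's sending rate from observed delivery reliability (all names, thresholds, and the AIMD policy itself are assumptions, not the authors' algorithm), a simple controller could look like this:

    // Illustrative AIMD-style controller: the publisher's sending rate is
    // increased additively while measured delivery reliability stays above a
    // target, and cut multiplicatively when it falls below the target.
    final class SendingRateController {

        private double ratePerSecond;
        private final double targetReliability;   // e.g. 0.999
        private final double additiveStep;        // msgs/s added per control period
        private final double multiplicativeCut;   // factor applied on violation

        SendingRateController(double initialRate, double targetReliability) {
            this.ratePerSecond = initialRate;
            this.targetReliability = targetReliability;
            this.additiveStep = 10.0;
            this.multiplicativeCut = 0.5;
        }

        /** @param measuredReliability fraction of samples delivered in the last period */
        double nextRate(double measuredReliability) {
            if (measuredReliability >= targetReliability) {
                ratePerSecond += additiveStep;          // probe for more throughput
            } else {
                ratePerSecond = Math.max(1.0, ratePerSecond * multiplicativeCut);
            }
            return ratePerSecond;
        }

        public static void main(String[] args) {
            SendingRateController c = new SendingRateController(100.0, 0.999);
            for (double r : new double[] {0.9995, 0.9999, 0.97, 0.999, 1.0}) {
                System.out.printf("reliability=%.4f -> rate=%.1f msg/s%n", r, c.nextRate(r));
            }
        }
    }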
... However, the standard offers the basis for security and privacy: it supports the concept of administrative domains, which allow the system to separate and confine the distribution of different data flows, limiting the visibility of publishers, subscribers, events, and subscriptions to a single domain. Finally, note that the above-mentioned DDS properties can be effectively applied only when the DDS is deployed in a strictly controlled setting (i.e., in a managed environment); in a large-scale, unreliable and unmanaged context, the performance obtainable from the DDS may become unpredictable [3]. Java Message Service (JMS) [20] is a standard promoted by Sun Microsystems to define a Java API for the implementation of message-oriented middleware. ...
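As a self-contained illustration of the domain-based confinement described above (the classes below are a toy model written for this page, not the DDS API), a sample published in one domain is only visible to subscribers that joined the same domain id:

    import java.util.*;
    import java.util.function.Consumer;

    // Toy model of DDS-style administrative domains: delivery is confined to
    // subscribers registered under the same domain id as the publisher.
    final class DomainBroker {
        private final Map<Integer, Map<String, List<Consumer<String>>>> domains = new HashMap<>();

        void subscribe(int domainId, String topic, Consumer<String> subscriber) {
            domains.computeIfAbsent(domainId, d -> new HashMap<>())
                   .computeIfAbsent(topic, t -> new ArrayList<>())
                   .add(subscriber);
        }

        void publish(int domainId, String topic, String sample) {
            domains.getOrDefault(domainId, Map.of())
                   .getOrDefault(topic, List.of())
                   .forEach(s -> s.accept(sample));
        }

        public static void main(String[] args) {
            DomainBroker broker = new DomainBroker();
            broker.subscribe(0, "alarms", s -> System.out.println("domain 0 got: " + s));
            broker.subscribe(1, "alarms", s -> System.out.println("domain 1 got: " + s));
            broker.publish(0, "alarms", "over-temperature");   // delivered only in domain 0
        }
    }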
Article
Full-text available
Up to now, Service Oriented Architectures and Event Driven Architectures have been considered competing parties striving to conquer the crown of standard paradigm for the implementation of complex distributed applications. Today we are witnessing large efforts to merge both paradigms and give birth to a new generation of middleware platforms that will inherit the best of both worlds. In this paper we describe how this marriage could be leveraged in order to design new dependable software systems.
... No total ordering reflecting real-time event generation is ensured for events originated from multiple and heterogeneous sources. In addition, its guaranteed QoS properties can be effectively applied only when the DDS is deployed in a strictly controlled setting (i.e., in a managed environment); in a large-scale, unreliable and unmanaged context, as collaborative event detection environments can be, the performance obtainable by the DDS may become unpredictable [10], thus compromising the possibility to support high-throughput CEPD systems. ...
Article
Full-text available
Most distributed applications today receive events, process them, and in turn create new events which are sent to other processes. Business intelligence, air traffic control, collaborative security, and complex system software management are examples of such applications. In these applications, basic events, potentially occurring at different sites, are correlated in order to detect complex event patterns formed by basic events that may have temporal and spatial relationships among them. In this context, a fundamental functionality is the data dissemination that brings events from event producers to event consumers, where complex event patterns are detected. In this paper we discuss the characteristics that a Data Dissemination service should have in order to best support the complex event pattern detection functionality. We consider that event traffic can reach thousands of events per second coming from different event sources; that is, the data dissemination service has to sustain high throughput. Finally, we present an assessment of a number of technologies that can be used to disseminate data in the aforementioned context, discussing scenarios where those technologies can be effectively deployed.
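The following is an illustrative sketch, not taken from the paper, of the kind of correlation a complex event processing engine performs on disseminated basic events: detecting an event of type "B" that occurs within a time window after an event of type "A" (the class, event types, and window length are assumptions).

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Detects the complex pattern "A followed by B within a time window"
    // over a stream of timestamped basic events.
    final class FollowedByDetector {
        private final long windowMillis;
        private final Deque<Long> pendingA = new ArrayDeque<>();

        FollowedByDetector(long windowMillis) { this.windowMillis = windowMillis; }

        /** Feed one basic event; returns true when the complex pattern A->B fires. */
        boolean onEvent(String type, long timestampMillis) {
            // drop "A" occurrences that have fallen outside the temporal window
            while (!pendingA.isEmpty() && timestampMillis - pendingA.peekFirst() > windowMillis) {
                pendingA.pollFirst();
            }
            if ("A".equals(type)) {
                pendingA.addLast(timestampMillis);
                return false;
            }
            return "B".equals(type) && !pendingA.isEmpty();
        }

        public static void main(String[] args) {
            FollowedByDetector d = new FollowedByDetector(5_000);
            System.out.println(d.onEvent("A", 1_000));  // false
            System.out.println(d.onEvent("B", 3_000));  // true: B within 5 s of A
            System.out.println(d.onEvent("B", 20_000)); // false: window expired
        }
    }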
Chapter
Storing information in a software system is challenging, especially in the cloud computing era. Traditional, battle-tested methods, like Object Relational Mapping, do not seem appropriate for all cases, while other alternatives feel too complicated to implement. Moreover, new software design methodologies, like Domain-Driven Design, provide alternative modeling tools for software systems, which, in contrast to Object-Oriented Design, focus more on Domain Events and their processing. Additionally, there is an increasing interest in causality when implementing software systems, mainly as a matter of accountability, tracing, and other similar properties, especially in the context of e-Government, in order to support transparency. We are now interested in a system's state history as well as its current state, and that is no longer the exception. Therefore, this paper focuses on Object Relational Mapping and Event-Sourcing trends over the past years as two alternatives for storing application data, through a systematic literature review. We evaluate how these two alternatives have evolved over the years under the prism of academic literature and discuss our findings according to modern application development requirements.
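A minimal event-sourcing sketch, assuming a toy bank-account domain and not tied to any framework mentioned in the chapter: instead of storing only the current balance, every change is appended as an event, and the current state is rebuilt by folding the event history, which remains available for auditing and tracing.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal event-sourcing illustration: state is derived from an
    // append-only log of domain events rather than stored in place.
    final class Account {
        sealed interface Event permits Deposited, Withdrawn {}
        record Deposited(long cents) implements Event {}
        record Withdrawn(long cents) implements Event {}

        private final List<Event> history = new ArrayList<>();

        void deposit(long cents)  { history.add(new Deposited(cents)); }
        void withdraw(long cents) { history.add(new Withdrawn(cents)); }

        /** Current state derived from the full history, which stays available for audit. */
        long balance() {
            long balance = 0;
            for (Event e : history) {
                if (e instanceof Deposited d)      balance += d.cents();
                else if (e instanceof Withdrawn w) balance -= w.cents();
            }
            return balance;
        }

        public static void main(String[] args) {
            Account a = new Account();
            a.deposit(10_00);
            a.withdraw(3_50);
            System.out.println(a.balance()); // 650, with the full change history retained
        }
    }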
Chapter
Integrating real-time RFID data into autonomous and heterogeneous information systems across the business value chain presents a number of challenges. At an abstract architecture level, this paper identifies important requirements for RFID data provisioning and points of integration. A non-invasive architecture style is proposed to satisfy these requirements. It has the advantages of low entry barriers, low latency, high flexibility, and independent evolvability. The architecture style is used as a basis for evaluating three existing architectures for RFID data provisioning. Various architecture mismatches that could hinder the pace of RFID adoption are identified and discussed. A new asymmetric integration approach is suggested as an alternative to existing methods.
Article
Full-text available
The analysis, and eventual approval or rejection, of new enterprise information technology (IT) initiatives often proceeds on the basis of informal estimates of return on investment. Investment in new IT initiatives includes the costs of hardware, software licenses, application development tailored to the enterprise, and maintenance. Returns are typically estimated informally in terms of cost savings or revenue increases. This paper makes the case for evaluating certain IT investments in the same way as investments in factories and other resources have been evaluated for decades. Just as industrial factories create value by transforming raw materials into finished products, some IT investments, which we call “information factories”, create value by transforming raw information (events) into structured data (and possibly actions based on that data). The return on investment is estimated by the difference between the economic value of the structured data and concomitant actions (the “finished product”) and that of the data available within the enterprise, from its partners and customers, and from the Internet (the “raw materials”). This paper introduces the concept of the information factory, and explores design considerations for maximizing the economic efficiency of information factories.
Article
Is this the future for scientific analysis?
B. McCormick and L. Madden. Open architecture publish-subscribe benchmarking. In OMG Real-Time Embedded System Workshop, 2005.

Object Management Group. Data Distribution Service for Real-Time Systems Specification, 2002.

Sun Microsystems Inc. Java Message Service API, rev. 1.1, 2002.

K. M. Chandy. Event-driven applications: Costs, benefits and design approaches. In Gartner Application Integration and Web Services Summit, 2006.

Object Management Group. CORBA Event Service Specification, version 1.1. OMG Document formal.

Object Management Group. CORBA Notification Service Specification, version 1.0.1. OMG Document formal.