Preprint

Distributed Scheduling of Event Analytics across Edge and Cloud

Abstract

Internet of Things (IoT) domains generate large volumes of high-velocity event streams from sensors, which need to be analyzed with low latency to drive decisions. Complex Event Processing (CEP) is a Big Data technique to enable such analytics, and is traditionally performed on Cloud Virtual Machines (VMs). Leveraging captive IoT edge resources in combination with Cloud VMs can offer better performance, flexibility and monetary costs for CEP. Here, we formulate an optimization problem for energy-aware placement of CEP queries, composed as an analytics dataflow, across a collection of edge and Cloud resources, with the goal of minimizing the end-to-end latency for the dataflow. We propose a Genetic Algorithm (GA) meta-heuristic to solve this problem, and compare it against a brute-force optimal algorithm (BF). We perform detailed real-world benchmarks on the compute, network and energy capacity of edge and Cloud resources. These results are used to define a realistic and comprehensive simulation study that validates the BF and GA solutions for 45 diverse CEP dataflows, LAN and WAN setups, and different edge resource availability. We compare the GA and BF solutions against random and Cloud-only baselines for different configurations, for a total of 1764 simulation runs. Our study shows that the GA is within 97% of the optimal BF solution (which takes hours to compute), maps dataflows with 4-50 queries in 1-26 secs, and fails to offer a feasible solution at most 20% of the time.
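
To make the placement formulation concrete, the sketch below illustrates one way a GA can map each query of a CEP dataflow onto edge and Cloud resources with end-to-end latency as the fitness. It is an illustrative toy in Python, not the authors' algorithm: the dataflow, compute latencies, network latencies and GA operators are all assumed values, and the energy constraint of the formulation is omitted.

```python
# Illustrative toy (not the paper's implementation): a GA that maps each CEP
# query in a dataflow DAG to an edge or Cloud resource, minimizing the
# end-to-end (critical path) latency. All inputs below are assumed values.
import random

QUERIES = ["q0", "q1", "q2", "q3"]                                # topologically ordered
EDGES = [("q0", "q1"), ("q0", "q2"), ("q1", "q3"), ("q2", "q3")]  # dataflow edges
RESOURCES = ["edge0", "edge1", "cloud0"]
COMPUTE_MS = {"edge0": 5.0, "edge1": 6.0, "cloud0": 2.0}          # per-query latency
NET_MS = {("edge", "edge"): 2.0, ("edge", "cloud"): 60.0,
          ("cloud", "edge"): 60.0, ("cloud", "cloud"): 1.0}       # link latency by side

def side(resource):
    return "cloud" if resource.startswith("cloud") else "edge"

def end_to_end_latency(placement):
    """Critical-path latency of the DAG under a {query: resource} mapping."""
    finish = {}
    for q in QUERIES:
        start = 0.0
        for (u, v) in EDGES:
            if v == q:
                start = max(start, finish[u] + NET_MS[(side(placement[u]), side(placement[q]))])
        finish[q] = start + COMPUTE_MS[placement[q]]
    sinks = [q for q in QUERIES if all(u != q for (u, _) in EDGES)]
    return max(finish[q] for q in sinks)

def crossover(a, b):
    return {q: (a[q] if random.random() < 0.5 else b[q]) for q in QUERIES}

def mutate(p, pm=0.1):
    return {q: (random.choice(RESOURCES) if random.random() < pm else r) for q, r in p.items()}

def ga(pop_size=30, generations=100):
    pop = [{q: random.choice(RESOURCES) for q in QUERIES} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=end_to_end_latency)
        elite = pop[: pop_size // 2]                              # keep the fitter half
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=end_to_end_latency)

best = ga()
print(best, round(end_to_end_latency(best), 1), "ms")
```

A brute-force counterpart would enumerate all len(RESOURCES)**len(QUERIES) placements and keep the best feasible one, which is why BF takes hours on larger dataflows while the GA stays within seconds.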


References
Conference Paper
This paper compares various selection techniques used in Genetic Algorithms. Genetic algorithms are optimization search algorithms that maximize or minimize given functions. Identifying the appropriate selection technique is a critical step in a genetic algorithm. The process of selection plays an important role in avoiding premature convergence, which occurs due to a lack of diversity in the population; the selection of the population in each generation is therefore very important. In this study, we report the significant work conducted on various selection techniques and compare them.
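
For illustration, two of the selection operators most often contrasted in such comparisons, roulette-wheel and tournament selection, can be sketched as below. This is a generic Python illustration, not code from the surveyed work; it assumes a population of (individual, fitness) pairs with fitness to be maximized.

```python
# Generic sketch of two common GA selection operators (not from the paper).
# Assumes a population of (individual, fitness) pairs with fitness to maximize.
import random

def roulette_select(population):
    """Fitness-proportionate (roulette-wheel) selection."""
    total = sum(f for _, f in population)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fitness in population:
        running += fitness
        if running >= pick:
            return individual
    return population[-1][0]

def tournament_select(population, k=3):
    """Tournament selection; a larger k raises the selection pressure."""
    contenders = random.sample(population, k)
    return max(contenders, key=lambda pair: pair[1])[0]

pop = [(f"ind{i}", random.random()) for i in range(10)]
print(roulette_select(pop), tournament_select(pop, k=4))
```
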
Article
This article focuses on a scalable software platform for the Smart Grid cyber-physical system using cloud technologies. Dynamic Demand Response (D2R) is a challenge-application to perform intelligent demand-side management and relieve peak load in Smart Power Grids. The platform offers an adaptive information integration pipeline for ingesting dynamic data; a secure repository for researchers to share knowledge; scalable machine-learning models trained over massive datasets for agile demand forecasting; and a portal for visualizing consumption patterns. It is validated at the University of Southern California's campus microgrid. The article examines the role of clouds and their tradeoffs for use in the Smart Grid cyber-physical system.
Article
A large number of distributed applications require continuous and timely processing of information as it flows from the periphery to the center of the system. Examples include intrusion detection systems, which analyze network traffic in real time to identify possible attacks; environmental monitoring applications, which process raw data coming from sensor networks to identify critical situations; or applications performing online analysis of stock prices to identify trends and forecast future values. Traditional DBMSs, which need to store and index data before processing it, can hardly fulfill the requirements of timeliness coming from such domains. Accordingly, during the last decade, different research communities developed a number of tools, which we collectively call Information Flow Processing (IFP) systems, to support these scenarios. They differ in their system architecture, data model, rule model, and rule language. In this article, we survey these systems to help researchers, who often come from different backgrounds, understand how the various approaches they adopt may complement each other. In particular, we propose a general, unifying model to capture the different aspects of an IFP system and use it to provide a complete and precise classification of the systems and mechanisms proposed so far.
Article
The world population is growing at a rapid pace. Towns and cities are accommodating half of the world's population, thereby creating tremendous pressure on every aspect of urban living. Cities are known to have a large concentration of resources and facilities. Such environments attract people from rural areas. However, this unprecedented attraction has now become an overwhelming issue for city governance and politics. The enormous pressure towards efficient city management has triggered various Smart City initiatives by both government and private-sector businesses to invest in ICT to find sustainable solutions to the growing issues. The Internet of Things (IoT) has also gained significant attention over the past decade. IoT envisions connecting billions of sensors to the Internet and expects to use them for efficient and effective resource management in Smart Cities. Today, infrastructure, platforms, and software applications are offered as services using cloud technologies. In this paper, we explore the concept of sensing as a service and how it fits with the Internet of Things. Our objective is to investigate the sensing-as-a-service model from technological, economical, and social perspectives and to identify the major open challenges and issues.
Conference Paper
Today, vast amounts of data are available from sources like sensors (RFIDs, Near Field Communication), web activities, transactions, social networks, etc. Making sense of this avalanche of data requires efficient and fast processing. Processing high volumes of events to derive higher-level information is a vital part of taking critical decisions, and Complex Event Processing (CEP) has become one of the most rapidly emerging fields in data processing. e-Science use-cases, business applications, financial trading applications, operational analytics applications and business activity monitoring applications are some use-cases that directly use CEP. This paper discusses different design decisions associated with CEP Engines, and proposes some approaches to improve CEP performance by using more stream-processing-style pipelines. Furthermore, the paper discusses Siddhi, a CEP Engine that implements those suggestions. We present a performance study showing that the resulting CEP Engine, Siddhi, has significantly improved performance. The primary contributions of this paper are a critical analysis of CEP Engine design, identification of suggestions for improvements, implementation of those improvements in Siddhi, and demonstration of the soundness of those suggestions through empirical evidence.
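
As a rough illustration of the stream-pipeline style of event processing argued for above, the Python sketch below chains a filter stage into a sliding-window aggregate over a toy event stream. It is a generic toy, not Siddhi's query language or API; the event fields and thresholds are assumptions.

```python
# Toy pipeline-style CEP sketch (not Siddhi's API): a filter stage feeds a
# sliding-window average stage, emitting derived higher-level events.
from collections import deque

def filter_stage(events, predicate):
    for e in events:
        if predicate(e):
            yield e

def window_average(events, size=3):
    window = deque(maxlen=size)
    for e in events:
        window.append(e["value"])
        if len(window) == size:
            yield {"avg": sum(window) / size, "last_ts": e["ts"]}

stream = [{"ts": t, "value": 20 + t} for t in range(10)]       # assumed toy stream
pipeline = window_average(filter_stage(stream, lambda e: e["value"] > 22))
for derived in pipeline:
    print(derived)
```
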
Conference Paper
The coalition formation problem has received a considerable amount of attention in recent years. In this work we present a novel distributed algorithm that returns a solution in polynomial time and the quality of the returned solution increases as agents gain more experience. Our solution utilizes an underlying organization to guide the coalition formation process. We use reinforcement learning techniques to optimize decisions made locally by agents in the organization. Experimental results are presented, showing the potential of our approach.
Conference Paper
The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm, previously proposed in literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.
Conference Paper
When an optimization problem is encoded using genetic algorithms, one must address issues of population size, crossover and mutation operators and probabilities, stopping criteria, selection operator and pressure, and the fitness function to be used in order to solve the problem. This paper tests a relationship between (1) crossover probability, (2) mutation probability, and (3) selection pressure using two problems. This relationship is based on the schema theorem proposed by Holland and reflects the fact that the choice of parameters and operators for genetic algorithms needs to be problem specific.
Article
Self-organizing structured peer-to-peer systems.
Article
Recent developments in database technology, such as deductive database systems, have given rise to the demand for new, cost-effective optimization techniques for join expressions. In this paper, many different algorithms that compute approximate solutions for optimizing join orders are studied, since traditional dynamic programming techniques are not appropriate for complex problems. First, two possible solution spaces, the space of left-deep and bushy processing trees, respectively, are evaluated from a statistical point of view. The result is that the common limitation to left-deep processing trees is only advisable for certain join graph types. Basically, optimizers from three classes are analysed: heuristic, randomized and genetic algorithms. Each one is extensively scrutinized with respect to its working principle and its fitness for the desired application. It turns out that randomized and genetic algorithms are well suited for optimizing join expressions. They generate...
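
For intuition, a left-deep processing tree can be encoded simply as a permutation of the relations, which is what makes randomized and genetic optimizers straightforward to apply. The sketch below costs such permutations under an assumed toy model (uniform join selectivity, cost as the sum of intermediate result sizes); it is ours, not the paper's cost model.

```python
# Sketch: a left-deep join order as a permutation of relations, costed with a
# toy cardinality model (assumed, not the paper's). Randomized and genetic
# optimizers search over exactly this kind of permutation space.
from itertools import permutations

CARD = {"R": 1000, "S": 500, "T": 50, "U": 10}        # assumed base relation sizes
SELECTIVITY = 0.01                                    # assumed uniform join selectivity

def left_deep_cost(order):
    """Sum of intermediate result sizes for ((R1 join R2) join R3) join ..."""
    size = CARD[order[0]]
    cost = 0.0
    for rel in order[1:]:
        size = size * CARD[rel] * SELECTIVITY
        cost += size
    return cost

best = min(permutations(CARD), key=left_deep_cost)    # exhaustive here; GA scales further
print(best, left_deep_cost(best))
```
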
Conference Paper
Internet of Things (IoT) is a technology paradigm where millions of sensors monitor, and help inform or manage, physical, environmental and human systems in real-time. The inherent closed-loop responsiveness and decision making of IoT applications makes them ideal candidates for using low latency and scalable stream processing platforms. Distributed Stream Processing Systems (DSPS) are becoming essential components of any IoT stack, but the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT data streams and applications. Here, we develop a benchmark suite and performance metrics to evaluate DSPS for streaming IoT applications. The benchmark includes 13 common IoT tasks classified across functional categories and forming micro-benchmarks, and two IoT applications for statistical summarization and predictive analytics that leverage various dataflow patterns of DSPS. These are coupled with stream workloads from real IoT observations on smart cities. We validate the benchmark for the popular Apache Storm DSPS, and present the results.
Article
Network latency prediction is important for server selection and quality-of-service estimation in real-time applications on the Internet. Traditional network latency prediction schemes attempt to estimate the latencies between all pairs of nodes in a network based on sampled round-trip times, through either Euclidean embedding or matrix factorization. However, these schemes become less effective in terms of estimating the latencies of personal devices, due to unstable and time-varying network conditions, triangle inequality violation and the unknown ranks of latency matrices. In this paper, we propose a matrix completion approach to network latency estimation. Specifically, we propose a new class of low-rank matrix completion algorithms, which predicts the missing entries in an extracted “network feature matrix” by iteratively minimizing a weighted Schatten-p norm to approximate the rank. Simulations on true low-rank matrices show that our new algorithm achieves better and more robust performance than multiple state-of-the-art matrix completion algorithms in the presence of noise. We further enhance latency estimation based on multiple “frames” of latency matrices measured in the past, and extend the proposed matrix completion scheme to the case of 3-D tensor completion. Extensive performance evaluations driven by real-world latency measurements collected from the Seattle platform show that our proposed approaches significantly outperform various state-of-the-art network latency estimation techniques, especially for networks that contain personal devices.
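
For intuition, the sketch below fills missing entries of a small low-rank "latency" matrix by iterative soft-thresholded SVD, a standard matrix-completion baseline. It is not the weighted Schatten-p norm algorithm proposed in the paper, and the matrix, threshold and iteration count are assumed toy values.

```python
# Sketch: fill missing latency entries by iterative soft-thresholded SVD,
# a standard low-rank completion baseline (not the paper's Schatten-p method).
import numpy as np

def complete(M, observed_mask, tau=1.0, iters=200):
    X = np.where(observed_mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)                  # soft-threshold singular values
        low_rank = (U * s) @ Vt
        X = np.where(observed_mask, M, low_rank)      # keep observed entries fixed
    return X

rng = np.random.default_rng(0)
true = rng.random((6, 2)) @ rng.random((2, 6)) * 100  # rank-2 "latency" matrix (ms)
mask = rng.random(true.shape) < 0.7                   # ~70% of entries observed
est = complete(true, mask)
print(np.abs(est - true)[~mask].mean())               # mean error on the missing entries
```
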
Article
The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as "the fog". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of "the fog", comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.
Article
In order to meet the increasing demand for high performance in smartphones, recent studies suggested mobile cloud computing techniques that aim to connect the phones to adjacent powerful cloud servers and offload their computational burden to the servers. These techniques often employ execution offloading schemes that migrate a process between machines during its execution. In execution offloading, the code regions to be executed on the server are decided statically or dynamically based on a complex analysis of the execution time and process state transfer costs of every region. Expectedly, the transfer cost is a deciding factor for the success of execution offloading. According to our analysis, it is dominated by the total size of heap objects transferred over the network, but previous work did not try hard to minimize this size. Thus, in this paper, we introduce novel techniques based on compiler code analysis that effectively reduce the transferred data size by transferring only the essential heap objects and the stack frames actually referenced on the server. The experiments show that the reduced size positively influences not only the transfer time itself but also the overall effectiveness of execution offloading, and ultimately improves the performance of our mobile cloud computing significantly in terms of execution time and energy consumption.
Article
The first Wearables DevCon for developers of wearables was held in San Francisco 5-7 March 2014 and was greeted with great enthusiasm, with more than 1,000 people attending the three-day event. With the introduction of Google Glass, Epson Moverio, Pebble, and Fitbit, wearables have certainly captured the attention of many consumers and enterprises. While some of these wearables mainly provide a single function, such as fitness tracking, health monitoring, and message display, others have taken on the integration of multiple functions in the same device; the BASIS watch, for example, combines time, fitness, and health-monitoring functions into a single device. According to a new market report published by Transparency Market Research, "Wearable Technology Market: Global Scenario, Trends, Industry Analysis, Size, Share and Forecast, 2012-2018," the global wearable technology market is expected to grow from US$750 million in 2012 to US$5.8 billion in 2018. U.K.-based Juniper Research projects that the number of wearable devices shipped will rise from about 13 million in 2013 to 130 million in 2018, and the size of the market will jump from US$1.4 billion in 2013 to US$19 billion in 2018. Business Insider Intelligence projects shipments of 100 million units in 2014 and forecasts the market will ultimately be worth about US$12 billion per year. Such widely divergent forecasts by research firms are typical when industries are in their relative infancies and hypergrowth mode.
Conference Paper
An increasingly large fraction of Internet services are hosted on a cloud computing system such as Amazon EC2 or Windows Azure. But to date, no in-depth studies about cloud usage by Internet services have been performed. We provide a detailed measurement study to shed light on how modern web service deployments use the cloud and to identify ways in which cloud-using services might improve these deployments. Our results show that: 4% of the Alexa top million use EC2/Azure; there exist several common deployment patterns for cloud-using web service front ends; and services can significantly improve their wide-area performance and failure tolerance by making better use of existing regional diversity in EC2. Driving these analyses are several new datasets, including one with over 34 million DNS records for Alexa websites and a packet capture from a large university network.
Conference Paper
Semantic Web allows us to model and query time-invariant or slowly evolving knowledge using ontologies. Emerging applications in Cyber-Physical Systems, such as Smart Power Grids, that require continuous information monitoring and integration present novel opportunities and challenges for Semantic Web technologies. The Semantic Web is promising for modeling diverse Smart Grid domain knowledge for enhanced situation awareness and response by multi-disciplinary participants. However, current technology does pose a performance overhead for dynamic analysis of sensor measurements. In this paper, we combine semantic web and complex event processing for stream-based semantic querying. We illustrate its adoption in the USC Campus Micro-Grid for detecting and enacting dynamic response strategies to peak power situations by diverse user roles. We also describe the semantic ontology and event query model that supports this. Further, we introduce and evaluate caching techniques to improve the response time for semantic event queries to meet our application needs and enable sustainable energy management.
Article
In feed-following applications such as Twitter and Facebook, users (consumers) follow a large number of other users (producers) to get personalized feeds, generated by blending producers' feeds. With the proliferation of Cloud-connected smart edge devices such as smartphones, producers and consumers of many feed-following applications reside on edge devices and the Cloud. An important design goal of such applications is to minimize communication (and energy) overhead of edge devices. In this paper, we abstract distributed feed-following applications as a view maintenance problem, with the goal of optimally placing the views on edge devices and in the Cloud to minimize communication overhead between edge devices and the Cloud. The view placement problem for general network topology is NP Hard; however, we show that for the special case of Cloud-edge topology, locally optimal solutions yield a globally optimal view placement solution. Based on this powerful result, we propose view placement algorithms that are highly efficient, yet provably minimize global network cost. Compared to existing works on feed-following applications, our algorithms are more general--they support views with selection, projection, correlation (join) and arbitrary black-box operators, and can even refer to other views. We have implemented our algorithms within a distributed feed-following architecture over real smartphones and the Cloud. Experiments over real datasets indicate that our algorithms are highly scalable and orders-of-magnitude more efficient than existing strategies for optimal placement. Further, our results show that optimal placements generated by our algorithms are often several factors better than simpler schemes.
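
The structural insight, that on a Cloud-edge (star-like) topology each view's placement can be decided from local traffic rates, can be caricatured with a per-view rule like the one below. This is our simplification for illustration only, not the paper's view placement algorithm; the rates and the scenario (sources on an edge device, results consumed in the Cloud) are assumptions.

```python
# Toy per-view rule (our simplification, not the paper's algorithm): place the
# view on the side of the edge-Cloud link that minimizes the event rate crossing it.
def place_view(input_rate_at_edge, output_rate):
    """Sources live on the edge device and the view's output is consumed in the Cloud:
    edge placement ships output_rate upward, Cloud placement ships input_rate_at_edge."""
    return "edge" if output_rate < input_rate_at_edge else "cloud"

# A filtering view that discards 90% of a 100 events/sec sensor stream belongs on the edge:
print(place_view(input_rate_at_edge=100.0, output_rate=10.0))   # -> "edge"
```
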
Article
Large-scale systems are part of a growing trend in distributed computing, and coordinating control of them is an increasing challenge. This paper presents a cooperative agent system that scales to one million or more nodes in which agents form coalitions to complete global task objectives. This approach uses the large-scale Command and Control C2 capabilities of the Resource Clustered Chord RC-Chord Hierarchical Peer-to-Peer HP2P design. Tasks are submitted that require access to processing, data, or hardware resources, and a distributed agent search is performed to recruit agents to satisfy the distributed task. This approach differs from others by incorporating design elements to accommodate large-scale systems into the resource location algorithm. Peersim simulations demonstrate that the distributed coalition formation algorithm is as effective as an omnipotent central algorithm in a one million agent system.
Conference Paper
The combination of cloud computing and mobile computing technologies has led to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data. To the best of our knowledge, this is the first work to study the partitioning problem for mobile data stream applications, where the optimization targets high throughput in processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. Different from existing works, the framework not only allows dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partitioning. Both numerical evaluations and real-world experiments have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.
Article
Recent advances in cloud technology have turned the idea of cloud gaming into a reality. Cloud gaming, in its simplest form, renders an interactive gaming application remotely in the cloud and streams the scenes as a video sequence back to the player over the Internet. This is an advantage for less powerful computational devices that are otherwise incapable of running high-quality games. Such industrial pioneers as Onlive and Gaikai have seen success in the market with large user bases. In this article, we conduct a systematic analysis of state-of-the-art cloud gaming platforms, and highlight the uniqueness of their framework design. We also measure their real world performance with different types of games, for both interaction latency and streaming quality, revealing critical challenges toward the widespread deployment of cloud gaming.
Conference Paper
Particle Swarm Optimization (PSO) is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. PSO is similar to the Genetic Algorithm (GA) in the sense that these two evolutionary heuristics are population-based search methods. In other words, PSO and the GA move from a set of points (population) to another set of points in a single iteration with likely improvement using a combination of deterministic and probabilistic rules. The GA and its many versions have been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and the ability to effectively solve highly nonlinear, mixed integer optimization problems that are typical of complex engineering systems. The drawback of the GA is its expensive computational cost. This paper attempts to examine the claim that PSO has the same effectiveness (finding the true global optimal solution) as the GA but with significantly better computational efficiency (fewer function evaluations) by implementing statistical analysis and formal hypothesis testing. The performance comparison of the GA and PSO is implemented using a set of benchmark test problems as well as two space systems design optimization problems, namely, telescope array configuration and spacecraft reliability-based design.
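
For reference, a minimal, textbook-style PSO loop is sketched below on a simple continuous test function. It is a generic illustration, not the benchmark or space-systems problems used in the paper, and the inertia and acceleration coefficients are assumed typical values.

```python
# Minimal, generic PSO sketch (not the paper's setup): minimize the sphere function.
import random

def pso(dim=5, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    f = lambda x: sum(v * v for v in x)                       # objective to minimize
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                               # per-particle best positions
    gbest = min(pbest, key=f)[:]                              # swarm-wide best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)

print(pso())
```
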
Article
The Internet of Things, an emerging global Internet-based technical architecture facilitating the exchange of goods and services in global supply chain networks, has an impact on the security and privacy of the involved stakeholders. Measures ensuring the architecture's resilience to attacks, data authentication, access control and client privacy need to be established. An adequate legal framework must take the underlying technology into account and would best be established by an international legislator, which is supplemented by the private sector according to specific needs and thereby becomes easily adjustable. The contents of the respective legislation must encompass the right to information, provisions prohibiting or restricting the use of mechanisms of the Internet of Things, rules on IT-security legislation, provisions supporting the use of mechanisms of the Internet of Things and the establishment of a task force doing research on the legal challenges of the IoT.
Conference Paper
In sensor networks, data acquisition frequently takes place at low-capability devices. The acquired data is then transmitted through a hierarchy of nodes having progressively increasing network bandwidth and computational power. We consider the problem of executing queries over these data streams, posed at the root of the hierarchy. To minimize data transmission, it is desirable to perform "in-network" query processing: do some part of the work at intermediate nodes as the data travels to the root. Most previous work on in-network query processing has focused on aggregation and inexpensive filters. In this paper, we address in-network processing for queries involving possibly expensive conjunctive filters, and joins. We consider the problem of placing operators along the nodes of the hierarchy so that the overall cost of computation and data transmission is minimized. We show that the problem is tractable, give an optimal algorithm, and demonstrate that a simpler greedy operator placement algorithm can fail to find the optimal solution. Finally we define a number of interesting variations of the basic operator placement problem and demonstrate their hardness.
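
The simpler greedy heuristic that the paper shows can miss the optimal placement can be sketched roughly as below: push a filter to the lowest node in the hierarchy where evaluating it is cheaper than shipping the tuples it would discard over the next link. The hierarchy, costs and myopic rule are our assumptions for illustration, not the paper's algorithm.

```python
# Toy greedy placement of one expensive filter along a leaf->root node hierarchy
# (our sketch of the kind of simple greedy the paper shows can be suboptimal).
# cpu_cost[i]: per-tuple cost of evaluating the filter at node i (leaves are weaker).
# link_cost[i]: per-tuple cost of shipping a tuple from node i to node i+1 (root: 0).
def greedy_place(selectivity, cpu_cost, link_cost):
    for i, (cpu, link) in enumerate(zip(cpu_cost, link_cost)):
        saved = (1.0 - selectivity) * link      # tuples dropped here never use the next link
        if cpu <= saved:                        # myopic: ignores savings on links above i,
            return i                            # which is why greedy can miss the optimum
    return len(cpu_cost) - 1                    # otherwise run the filter at the root

print(greedy_place(selectivity=0.2, cpu_cost=[5.0, 2.0, 0.5], link_cost=[1.0, 4.0, 0.0]))
```
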
Conference Paper
Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device.
Conference Paper
In this paper, we present the first study that examines the impact of application task mapping on the reliability of multiprocessor system-on-chip (MPSoC) in the presence of single-event upsets (SEUs). Based on this study, we propose a novel soft error-aware design optimization using joint power minimization through voltage scaling and reliability improvement through application task mapping. The aim is to minimize the number of SEUs experienced by the MPSoC for a suitably identified voltage scaling of the system processing cores such that the power is reduced and the real-time constraint is met. We evaluate the effectiveness of our technique using different applications, including an MPEG-2 video decoder and random task graphs. We show that for an MPEG-2 decoder with four processing cores, our technique produces a design that experiences 38% less SEUs than soft error-unaware design optimization for a soft error rate of 1e−9, while consuming 9% less power and meeting a given real-time constraint. Furthermore, we investigate the impact of architecture allocation (varying the number of MPSoC cores) on the power consumption and SEUs experienced. We show that for an MPSoC with six processing cores and a given real-time constraint, the proposed technique experiences up to 7% less SEUs compared to soft error-unaware optimization, while consuming only 3% higher power.
Conference Paper
The education industry has a very poor record of productivity gains. In this brief article, I outline some of the ways the teaching of a college course in database systems could be made more efficient, and staff time used more productively. These ideas ...
Article
We examine the computational complexity of scheduling problems associated with a certain abstract model of a multiprocessing system. The essential elements of the model are a finite number of identical processors, a finite set of tasks to be executed, a partial order constraining the sequence in which tasks may be executed, a finite set of limited resources, and, for each task, the time required for its execution and the amount of each resource which it requires. We focus on the complexity of algorithms for determining a schedule which satisfies the partial order and resource usage constraints and which completes all required processing before a given fixed deadline. For certain special cases, it is possible to give such a scheduling algorithm which runs in low order polynomial time. However, the main results of this paper imply that almost all cases of this scheduling problem, even with only one resource, are NP-complete and hence are as difficult as the notorious traveling salesman problem.
Article
A report on the DAG scheduling problem for multiprocessors, covering the DAG model, generation of DAGs, variations in the DAG model, the multiprocessor model, and the NP-completeness of the DAG scheduling problem.
Article
Complex Event Detection (CED) is emerging as a key capability for many monitoring applications such as intrusion detection, sensor-based activity & phenomena tracking, and network monitoring. Existing CED solutions commonly assume centralized availability and processing of all relevant events, and thus incur significant overhead in distributed settings. In this paper, we present and evaluate communication efficient techniques that can efficiently perform CED across distributed event sources. Our techniques are plan-based: we generate multi-step event acquisition and processing plans that leverage temporal relationships among events and event occurrence statistics to minimize event transmission costs, while meeting application-specific latency expectations. We present an optimal but exponential-time dynamic programming algorithm and two polynomial-time heuristic algorithms, as well as their extensions for detecting multiple complex events with common sub-expressions. We characterize the behavior and performance of our solutions via extensive experimentation on synthetic and real-world data sets using our prototype implementation.
Article
Complex event detection is an advanced form of data stream processing where the stream(s) are scrutinized to identify given event patterns. The challenge for many complex event processing (CEP) systems is to be able to evaluate event patterns on high-volume data streams while adhering to real-time constraints. To solve this problem, in this paper we present a hardware based complex event detection system implemented on field-programmable gate arrays (FPGAs). By inserting the FPGA directly into the data path between the network interface and the CPU, our solution can detect complex events at gigabit wire speed with constant and fully predictable latency, independently of network load, packet size or data distribution. This is a significant improvement over CPU based systems and an architectural approach that opens up interesting opportunities for hybrid stream engines that combine the flexibility of the CPU with the parallelism and processing power of FPGAs.
Y. Ahmad and U. Çetintemel, "Network-aware query processing for stream-based applications," in International Conference on Very Large Data Bases, ser. VLDB. Toronto, Canada: VLDB Endowment, 2004, pp. 456-467.

C. Mutschler, H. Ziekow, and Z. Jerzak, "The DEBS grand challenge," in International Conference on Distributed Event-based Systems, ser. DEBS. NY, USA: ACM, 2013, pp. 289-294.

M. Hirzel, "Partition and compose: Parallel complex event processing," in ACM International Conference on Distributed Event-Based Systems, ser. DEBS. NY, USA: ACM, 2012, pp. 191-200.

N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing," Future Gener. Comput. Syst., vol. 29, no. 1, pp. 84-106, Jan. 2013.

C. Shi, V. Lakafosis, M. H. Ammar, and E. W. Zegura, "Serendipity: Enabling remote computing among intermittently connected mobile devices," in ACM International Symposium on Mobile Ad Hoc Networking and Computing, ser. MobiHoc. NY, USA: ACM, 2012, pp. 145-154.

N. Govindarajan, Y. Simmhan, N. Jamadagni, and P. Misra, "Event processing across edge and the cloud for internet of things applications," in International Conference on Management of Data, ser. COMAD. Mumbai, India: Computer Society of India, 2014, pp. 101-104.

SmartX, "IISc smart campus: Closing the loop from network to knowledge," 2016. [Online]. Available: http://smartx.cds.iisc.ac.in

B. Chun, D. Culler, T. Roscoe, A. Bavier, L. Peterson, M. Wawrzoniak, and M. Bowman, "PlanetLab: An overlay testbed for broad-coverage services," SIGCOMM Comput. Commun. Rev., vol. 33, no. 3, pp. 3-12, Jul. 2003.