ABSTRACT: Object detection and classification are basic tasks in video analytics and the starting point for other complex applications. Traditional video analytics approaches are manual and time-consuming, and subjective due to the involvement of the human factor. We present a cloud-based video analytics framework for scalable and robust analysis of video streams. The framework empowers an operator by automating the object detection and classification process on recorded video streams. An operator only specifies the analysis criteria and the duration of the video streams to analyse. The streams are then fetched from cloud storage, decoded and analysed on the cloud. The framework offloads compute-intensive parts of the analysis to GPU-powered servers in the cloud. Vehicle and face detection are presented as two case studies for evaluating the framework, with one month of data and a 15-node cloud. The framework reliably performed object detection and classification on the data, comprising 21,600 video streams and 175 GB in size, in 6.52 hours. The GPU-enabled deployment of the framework took 3 hours to perform the analysis on the same number of video streams, making it at least twice as fast as the standard cloud deployment.
Article · Jan 2016 · IEEE Transactions on Cloud Computing
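The reported figures above imply the following throughput and speedup; a quick sanity check, assuming the 6.52-hour and 3-hour totals each cover all 21,600 streams:

```python
# Sanity-check the reported throughput and speedup figures.
STREAMS = 21_600    # video streams analysed
CPU_HOURS = 6.52    # standard cloud deployment
GPU_HOURS = 3.0     # GPU-enabled deployment

cpu_throughput = STREAMS / CPU_HOURS  # ~3313 streams/hour
gpu_throughput = STREAMS / GPU_HOURS  # 7200 streams/hour
speedup = CPU_HOURS / GPU_HOURS       # ~2.17x

print(f"CPU: {cpu_throughput:.0f} streams/h, GPU: {gpu_throughput:.0f} streams/h")
print(f"Speedup: {speedup:.2f}x")
assert speedup >= 2.0  # consistent with "at least twice as fast"
```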
ABSTRACT: Business Intelligence (BI) has gained a new lease of life through Cloud computing, as its demand for unlimited expandability of hardware and platform resources is fulfilled by the Cloud's elasticity features. BI can be seamlessly deployed on the Cloud given that its multilayered model coincides with the Cloud models. It is considered by many Cloud service providers as one of the prominent application services on public, outsourced private and outsourced community Clouds. However, in the shared domains of Cloud computing, BI is exposed to security and privacy threats by virtue of exploits, eavesdropping, distributed attacks, malware attacks, and other known challenges of Cloud computing. Given the multilayered model of BI and Cloud computing, its protection on the Cloud needs to be ensured through multilayered controls. This paper proposes a multilayered hierarchical inter-cloud connectivity model for sequential packet inspection of tenant sessions accessing BI as a service on Cloud computing, using an algorithm that ensures multi-level session inspection, enforces security controls at all seven layers, and prevents attacks from occurring. The simulations present the effects of distributed attacks on BI systems by attackers posing as genuine Cloud tenants. The results reflect how the attackers are blocked by the multilayered security and privacy controls deployed for protecting the BI servers and databases.
ABSTRACT: Alongside the healthy development of Cloud-based technologies across various application deployments, the associated energy consumption incurred by the excess usage of Information and Communication Technology (ICT) resources is one of the serious concerns demanding effective solutions with immediate effect. Effective auto-scaling of Cloud resources in accordance with incoming user demand, thereby reducing idle resources, is one optimum solution which not only reduces excess energy consumption but also helps maintain the Quality of Service (QoS). In achieving such tasks, estimating user demand in advance with a reliable level of accuracy has become an integral and vital component. With this in mind, this research work analyses Cloud workloads and evaluates the performance of two widely used prediction techniques, Markov modelling and Bayesian modelling, with 7 hours of Google cluster data. An important outcome of this research work is the categorization and characterization of Cloud workloads, which will assist in modelling the parameters for user demand prediction.
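As an illustration of the Markov modelling approach evaluated above, a minimal sketch of first-order transition estimation and next-state prediction; the discretized workload levels and the trace data are invented for illustration and are not taken from the Google cluster data:

```python
from collections import defaultdict

def fit_transitions(states):
    """Estimate a first-order Markov transition table from an observed
    sequence of discretized workload levels (e.g. 'low'/'mid'/'high')."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(states, states[1:]):
        counts[cur][nxt] += 1
    # Normalize counts into transition probabilities.
    return {s: {t: c / sum(row.values()) for t, c in row.items()}
            for s, row in counts.items()}

def predict_next(transitions, current):
    """Predict the most likely next workload level."""
    row = transitions.get(current)
    return max(row, key=row.get) if row else current

# Hypothetical hourly workload levels discretized from a cluster trace.
trace = ["low", "low", "low", "mid", "high", "high",
         "mid", "low", "low", "mid", "high"]
model = fit_transitions(trace)
print(predict_next(model, "mid"))
```

A Bayesian treatment would replace the raw frequency estimates with a prior over transition probabilities, but the prediction step stays the same.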
ABSTRACT: The goal of Opportunistic Networks (OppNets) is to enable message transmission in an infrastructure-less environment where a reliable end-to-end connection between hosts is not possible at all times. The role of OppNets is crucial in today's communication, as it is still not possible to build a communication infrastructure in some geographical areas, including mountains, oceans and other remote areas. Nodes participating in the message forwarding process in OppNets experience frequent disconnections. The employment of an appropriate routing protocol to achieve successful message delivery is one of the desirable requirements of OppNets. Routing challenges are complex and evident in OppNets due to the dynamic nature and topology of these intermittent networks, which adds complexity to the choice of a suitable protocol for message forwarding in opportunistic scenarios. With this in mind, the aim of this paper is to analyse a number of algorithms under each class of routing techniques that support message forwarding in OppNets, and to compare the studied algorithms in terms of their performance, forwarding techniques, outcomes and success rates. An important outcome of this paper is the identification of the optimum routing protocol under each class of routing.
ABSTRACT: Wireless networks are an integral part of day-to-day life for many people, with businesses and home users relying on them for connectivity and communication. This paper examines the problems relating to wireless security and the background literature. Following this, primary research has been undertaken that focuses on current trends in wireless security. Previous work is used to create a timeline of encryption usage and to exhibit the differences between 2009 and 2012. Moreover, a novel 802.11 denial-of-service device has been created to demonstrate how a new threat can be designed from current technologies and freely available equipment. The findings are then used to produce recommendations that present the most appropriate countermeasures to the threats found.
Article · Apr 2014 · Wireless Personal Communications
ABSTRACT: Online social networks (OSNs) are immensely prevalent and have become a ubiquitous and important part of modern, developed society. However, online social networks pose significant problems for digital forensic investigators. Data may reside on multiple servers in multiple countries, across multiple jurisdictions. Capturing it before it is overwritten or deleted is a known problem, mirrored in other cloud-based services. In this article, a novel method has been developed for the extraction, analysis, visualization, and comparison of snapshotted user profile data from the online social network Twitter. The research follows a process of design, implementation, simulation, and experimentation. The source code of the tool developed to facilitate data extraction has been made available on the Internet.
Article · Feb 2014 · Science China Information Sciences
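The comparison of snapshotted profile data described above can be sketched minimally as a field-by-field diff of two snapshots; the profile fields below are hypothetical and do not reflect the actual tool's data format:

```python
def diff_snapshots(old, new):
    """Compare two snapshotted profile dicts and report fields that
    changed between captures, as (old_value, new_value) pairs."""
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k))
            for k in keys if old.get(k) != new.get(k)}

# Two hypothetical snapshots of the same Twitter profile.
snap_jan = {"screen_name": "alice", "followers": 120, "bio": "researcher"}
snap_feb = {"screen_name": "alice", "followers": 150,
            "bio": "researcher", "location": "UK"}

changes = diff_snapshots(snap_jan, snap_feb)
print(changes)  # fields added or modified between the two captures
```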
ABSTRACT: Peer-to-peer networks are becoming increasingly popular as a method of creating highly scalable and robust distributed systems. To address performance issues when scaling traditional unstructured protocols to large network sizes, many protocols have been proposed that make use of distributed hash tables to provide a decentralised and robust routing table. This paper investigates the most significant structured distributed hash table (DHT) protocols through a comparative literature review and a critical analysis of results from controlled simulations. The paper identifies several key design differences, with Pastry performing best in every test. Chord performs worst, mostly attributable to its unidirectional distance metric, while significant generation of maintenance messages holds Kademlia back in the bandwidth tests.
Article · Jan 2014 · International Journal of Embedded Systems
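The "unidirectional distance metric" noted above as Chord's weakness can be made concrete. A minimal sketch contrasting Chord's clockwise ring distance with Kademlia's symmetric XOR metric (the identifier size is chosen arbitrarily for illustration):

```python
M = 8  # identifier bits (ring of 2**8 positions), arbitrary for illustration

def chord_distance(a, b):
    """Chord's clockwise ring distance: a node can only measure (and route)
    toward b in one direction around the ring, so d(a, b) != d(b, a)."""
    return (b - a) % (2 ** M)

def kademlia_distance(a, b):
    """Kademlia's XOR distance: symmetric, so knowledge about a peer is
    useful for routing in both directions."""
    return a ^ b

a, b = 10, 200
print(chord_distance(a, b), chord_distance(b, a))        # unequal
print(kademlia_distance(a, b), kademlia_distance(b, a))  # equal

assert chord_distance(a, b) != chord_distance(b, a)
assert kademlia_distance(a, b) == kademlia_distance(b, a)
```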
ABSTRACT: In self-hosted environments it was feared that Business Intelligence (BI) would eventually face a resource crunch due to the never-ending expansion of data warehouses and the online analytical processing (OLAP) demands on the underlying networking. Cloud computing has instigated new hope for the future prospects of BI. However, how will BI be implemented on the Cloud, and what will the traffic and demand profile look like? This research attempts to answer these key questions regarding taking BI to the Cloud. The Cloud hosting of BI has been demonstrated with the help of a simulation in OPNET comprising a Cloud model with multiple OLAP application servers applying parallel query loads on an array of servers hosting relational databases. The simulation results reflect that extensible parallel processing by database servers on the Cloud can efficiently process OLAP application demands in Cloud computing.
Article · Jan 2014 · Journal of Computer and System Sciences
ABSTRACT: The concept behind cloud computing is to facilitate a wider distribution of hardware- and software-based services in the form of a consolidated infrastructure of various computing enterprises. In practice, cloud computing can be seen as an environment that combines cluster and grid characteristics integrated into a single setting. Currently, cloud computing resources are limited in interoperability, mainly because of the homogeneity and coherency of their resources. However, the number of users demanding cloud services has increased dramatically, and thus the need for collaborative clouds has increased as well. Various issues arise from this concerning scalability and customisability when managing multi-tenancy. Here, we present an algorithmic model for managing the interoperability of the cloud environment, namely the inter-cloud. We integrate our theoretical approach from the scope of orchestrating job execution in a distributed setting.
Article · Sep 2013 · International Journal of High Performance Computing and Networking
ABSTRACT: Highly dynamic overlay networks have a native ability to adapt their topology through rewiring in response to resource location and migration. However, this characteristic is not fully exploited in distributed resource discovery algorithms for nomadic resources. Recent and emergent computing paradigms (e.g. agile, nomadic, cloud, peer-to-peer computing) increasingly assume highly intermittent and nomadic resources shared over large-scale overlays. This work presents a discovery mechanism, Stalkers (and its three versions: Floodstalkers, Firestalkers and k-Stalkers), that is able to cooperatively extract implicit knowledge embedded within the network topology and quickly adapt to changes in resource locations. Stalkers aims to trace resource migrations by following only the links created by recent requestors. This characteristic allows search paths to bypass highly congested nodes, use collective knowledge to locate resources, and quickly respond to dynamic environments. Numerous experiments have shown a higher success rate and more stable performance compared to other related blind search mechanisms. More specifically, in fast-changing topologies, the Firestalkers version exhibits a good success rate, and low latency and message cost compared to other mechanisms.
Article · Aug 2013 · Future Generation Computer Systems
ABSTRACT: Massively Multiplayer Online Games (MMOGs) are networked games that allow a large number of people to play together. Classically, MMOG worlds are hosted on many powerful servers, and players that move around the world are passed from server to server as they pass through the environment. Running a large number of servers can be challenging, and there are many considerations for a developer who wants to create a game to enter the MMOG market. If it is possible to use a P2P network to host an MMOG successfully, the costs of running a server farm are taken out of the equation. This would allow groups with small budgets to enter the MMOG market and help competition in the marketplace. In this paper, methods for the design of P2P massively multiplayer game protocols are presented. Performance bottlenecks have been evaluated and highlighted by using simulations. The business viability is also discussed.
Article · Apr 2013 · Multimedia Tools and Applications
ABSTRACT: In recent years, much effort has been put into analysing the performance of large-scale distributed systems like grids, clouds and inter-clouds with respect to a diversity of resources and user requirements. A common way to achieve this is by using simulation frameworks to evaluate novel models prior to developing solutions in high-cost settings. In this work we focus on the SimIC simulation toolkit as an innovative discrete-event-driven solution to mimic the inter-cloud service formation, dissemination and execution phases, processes that are bundled in the inter-cloud meta-scheduling (ICMS) framework. Our work has meta-inspired characteristics, as we treat the inter-cloud as a decentralized and dynamic computing environment where meta-brokers act as distributed management nodes for dynamic, real-time decision making in an identical manner. To this end, we study the performance of service distribution among clouds based on a variety of metrics (e.g. execution time and turnaround) under different heterogeneous inter-cloud topologies. We also explore the behaviour of the ICMS for different user submissions in terms of their computational requirements. The aim is to produce results for a benchmark analysis of clouds in order to serve future research efforts on cloud and inter-cloud performance evaluation. The results are diverse in terms of the different performance metrics. For the ICMS in particular, an increased performance tendency is observed as the system scales to massive user requests, implying improved scalability and service elasticity figures.
ABSTRACT: This work covers the inter-cloud meta-scheduling system that encompasses the essential components of an interoperable cloud setting for wide service dissemination. The study illustrates a set of distributed and decentralized operations by highlighting meta-computing characteristics. This is achieved by using meta-brokers, which constitute a middle-standing component for orchestrating the decision-making process in order to select the most appropriate datacenter resource among collaborating clouds. The selection is based on heuristic performance criteria (e.g. service execution time, latency, energy efficiency etc.). Our solution is more advanced than conventional centralized schemes, as it offers robust, real-time, scalable, elastic and flexible service scheduling in a fully decentralized and dynamic manner. Similarly, issues related to bottlenecks on multiple service requests, heterogeneity, information exposition and variation of workloads are of prime focus. In view of that, the whole process is based upon random service requests from users that are clients of a sub-cloud of an inter-cloud datacenter, with access via a meta-broker. The inter-cloud facility distributes the request for service by enclosing each personalized service in a host virtual machine. The study presents a detailed discussion of the algorithmic model for demonstrating the whole service dissemination, allocation, execution and monitoring process, along with the preliminary implementation and configuration on the proposed SimIC simulation framework.
ABSTRACT: 'Simulating the Inter-Cloud' (SimIC) is a discrete-event simulation toolkit based on the process-oriented simulation package SimJava. SimIC aims to replicate an inter-cloud facility wherein multiple clouds collaborate with each other to distribute service requests according to the desired simulation setup. The package encompasses the fundamental entities of the inter-cloud meta-scheduling algorithm, such as users, meta-brokers, local brokers, datacenters, hosts, hypervisors and virtual machines (VMs). Additionally, resource discovery and scheduling policies, together with VM allocation, re-scheduling and VM migration strategies, are included as well. Using SimIC, a modeller can design a fully dynamic inter-cloud setting wherein collaboration is founded on the meta-scheduling-inspired characteristics of distributed resource managers that exchange user requirements as driven events in real-time simulations. SimIC aims to achieve interoperability, flexibility and service elasticity while introducing the notion of heterogeneity of multiple clouds' configurations. In addition, it accepts optimization of a variety of selected performance criteria for a diversity of entities. The crucial factor of dynamics has been implemented by allowing reactive orchestration based on the current workload of already executed heterogeneous user specifications. These take the form of text files that the modeller can load into the toolkit, and loading occurs in real time at different simulation intervals. Finally, each unique request is scheduled for execution on an internal cloud datacenter host VM that is capable of fulfilling the service contract, which is formally designed in Service Level Agreements (SLAs) based upon user profiling.
ABSTRACT: Peer-to-Peer (P2P) networking is an alternative to cloud computing for relatively more informal trade. One of the major obstacles to its development is the free-riding problem, which significantly degrades the scalability, fault tolerance and content availability of these systems. A bartering-exchange-ring-based incentive mechanism is one of the most common solutions to this problem. It organizes users with asymmetric interests into bartering exchange rings, enforcing that users contribute while consuming. However, the existing bartering exchange ring formation approaches are inefficient and static. This paper proposes a novel cluster-based incentive mechanism (CBIM) that enables dynamic ring formation by modifying the Query Protocol of the underlying P2P systems. It also uses a reputation system to alleviate malicious behaviour. Users identify free riders by fully utilizing their local transaction information; the identified free riders are blacklisted and thus isolated. The simulation results indicate that by applying the CBIM, the request success rate can be noticeably increased, since rational nodes are forced to become more cooperative and free-riding behaviour can be identified to a certain extent.
Article · Jan 2013 · Future Generation Computer Systems
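The local free-rider identification step described in the abstract above can be sketched as follows; the upload/download-ratio rule, the threshold, and the ledger format are assumptions for illustration, not the paper's actual CBIM protocol:

```python
def identify_free_riders(ledger, min_ratio=0.1):
    """Blacklist peers whose locally observed upload/download ratio falls
    below a threshold. `ledger` maps peer id -> (bytes the peer uploaded
    to us, bytes it downloaded from us), from local transaction history."""
    blacklist = set()
    for peer, (uploaded, downloaded) in ledger.items():
        if downloaded > 0 and uploaded / downloaded < min_ratio:
            blacklist.add(peer)
    return blacklist

# Hypothetical local transaction history of one node.
ledger = {
    "peerA": (500, 400),  # contributes roughly as much as it consumes
    "peerB": (10, 900),   # consumes heavily, contributes almost nothing
    "peerC": (0, 0),      # no transactions yet: not judged
}
print(identify_free_riders(ledger))  # → {'peerB'}
```

In a full mechanism, this local verdict would feed the reputation system so that blacklisted peers are excluded from future bartering rings.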
ABSTRACT: With rapid advances in Internet technologies and the increasing popularity of cyber social networks, the physical world and cyber world are gradually merging to form a new cyber-socio-physical society known as the Cyber Physical Society (CPS). In contrast to previous research studies in the cyber physical society, this paper focuses on a different aspect of CPS: green IT. The complex cyber-physical systems of the cloud bring unprecedented challenges in power resource management. This paper surveys the literature behind virtualization, mainly virtual desktop infrastructure, as the solution to these challenges, and investigates how to use cutting-edge virtualisation technologies to reduce the power consumption of IT infrastructure in the Cyber Physical Society. This research, and the implementation of a test virtual desktop environment using VMware and Wyse technologies, portrays the clear improvements that hypervisor and desktop virtualization can bring to IT infrastructures, drawing particular attention to power consumption and green incentives.
ABSTRACT: Business intelligence (BI) is a critical software system employed by the higher management of organizations for presenting business performance reports through Online Analytical Processing (OLAP) functionalities. BI faces sophisticated security issues given its strategic importance to the higher management of business entities. Scholars have emphasized enhanced session, presentation and application layer security in BI, in addition to the usual network and transport layer security controls, because an unauthorized user can gain access to highly sensitive consolidated business information in a BI system. To protect a BI environment, a number of controls are needed at the level of database objects, application files, and the underlying servers. In a cloud environment, controls are needed across all the components employed in the service-oriented architecture for hosting BI on the cloud. Hence, a BI environment (whether self-hosted or cloud-hosted) is expected to face significant security overheads. In this context, two models for securing BI on a cloud have been simulated in this paper. The first model secures BI using a Unified Threat Management (UTM) cloud; the second is based on distributed security controls embedded within the BI server arrays deployed throughout the Cloud. The simulation results revealed that the UTM model is expected to cause more overheads and bottlenecks per OLAP user than the distributed security model. However, the distributed security model is expected to pose greater challenges to the effectiveness of administrative controls than the UTM model. Based on the simulation results, it is recommended that a BI security model on a Cloud should comprise network, transport, session and presentation layer security controls through UTM, and application layer security through distributed security components. A mixed environment of both models will ensure the technical soundness of security controls, better security processes, clearly defined roles and accountabilities, and effectiveness of controls.
ABSTRACT: Virtualization offers flexible and rapid provisioning of physical machines. In recent years, isolation and migration of virtual machines have improved resource utilization as well as resource management techniques. This paper focuses on the process of migration and on leveraging virtual machine handling through automation. We outline current trends and issues with regard to datacentres that apply policy-based automation. An automated solution will improve the efficiency of operations in the datacentre, focusing mainly on improving resource handling and reducing power consumption. This could prove particularly useful in disaster recovery management, where power supplies need to be optimized. The focus of this work is an approach to virtual machine and power management. We present a discussion showing such functionality by using migration to reduce server sprawl, minimize power consumption and load-balance across physical machines.
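A minimal sketch of the consolidation idea discussed above: packing VMs onto as few hosts as possible (here via first-fit decreasing) so that idle hosts can be powered down to cut server sprawl and power consumption. The host capacity and VM loads are invented for illustration; real policy-based automation would weigh memory, I/O, SLAs and migration cost as well:

```python
def consolidate(vms, host_capacity):
    """Pack VMs (name -> CPU load) onto as few hosts as possible using
    first-fit decreasing; fewer active hosts means less power drawn."""
    hosts = []  # each entry: [used_capacity, [vm names placed here]]
    for name, load in sorted(vms.items(), key=lambda kv: -kv[1]):
        for slot in hosts:                      # try existing hosts first
            if slot[0] + load <= host_capacity:
                slot[0] += load
                slot[1].append(name)
                break
        else:                                   # no host fits: power one on
            hosts.append([load, [name]])
    return hosts

# Hypothetical VM CPU demands (percent of one host's capacity).
vms = {"web1": 40, "web2": 35, "db": 60, "cache": 20, "batch": 30}
placement = consolidate(vms, host_capacity=100)
print(f"{len(placement)} hosts active")
for used, names in placement:
    print(used, names)
```

The difference between the current number of active hosts and `len(placement)` gives the migration plan's saving: every host emptied by the plan can be suspended.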