Nick Antonopoulos

University of Derby, Derby, England, United Kingdom

Publications (91), total impact 20.13

  • John Panneerselvam, Lu Liu, Nick Antonopoulos, Yuan Bo
    ABSTRACT: Alongside the healthy development of Cloud-based technologies across various application deployments, the energy consumed by the excessive use of Information and Communication Technology (ICT) resources is a serious concern demanding effective solutions. Auto-scaling Cloud resources in accordance with incoming user demand, thereby reducing idle resources, is one solution that not only cuts excess energy consumption but also helps maintain Quality of Service (QoS). To achieve this, estimating user demand in advance with a reliable level of accuracy is an integral and vital component. With this in mind, this work analyses Cloud workloads and evaluates the performance of two widely used prediction techniques, Markov modelling and Bayesian modelling, on 7 hours of Google cluster data. An important outcome of this work is the categorisation and characterisation of Cloud workloads, which will assist in modelling the parameters for user demand prediction.
    2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing, London; 12/2014
  •
    ABSTRACT: The goal of Opportunistic Networks (OppNets) is to enable message transmission in infrastructure-less environments where a reliable end-to-end connection between hosts is not possible at all times. OppNets play a crucial role in today's communication, as it is still not possible to build a communication infrastructure in some geographical areas, including mountains, oceans and other remote regions. Nodes participating in the message-forwarding process in OppNets experience frequent disconnections. Employing an appropriate routing protocol to achieve successful message delivery is one of the key requirements of OppNets. Routing challenges are complex and evident in OppNets due to the dynamic nature and topology of these intermittent networks, which complicates the choice of a suitable protocol for message forwarding in opportunistic scenarios. With this in mind, this paper analyses a number of algorithms under each class of routing techniques that support message forwarding in OppNets and compares them in terms of performance, forwarding techniques, outcomes and success rates. An important outcome of this paper is the identification of the optimal routing protocol under each class of routing.
    EAI Endorsed Transactions on Industrial Networks and Intelligent Systems. 12/2014; 1(1):1-10.
  •
    ABSTRACT: Wireless networks are an integral part of day-to-day life for many people, with businesses and home users relying on them for connectivity and communication. This paper examines the problems relating to wireless security and reviews the background literature. Following this, primary research was undertaken focusing on current trends in wireless security. Previous work is used to create a timeline of encryption usage and to exhibit the differences between 2009 and 2012. Moreover, a novel 802.11 denial-of-service device was created to demonstrate how a new threat can be designed using current technologies and freely available equipment. The findings are then used to produce recommendations presenting the most appropriate countermeasures to the threats found.
    Wireless Personal Communications: An International Journal. 04/2014; 75(3):1669-1687.
  • Hussain Al-Aqrabi, Lu Liu, Richard Hill, Nick Antonopoulos
    ABSTRACT: In self-hosted environments it was feared that Business Intelligence (BI) would eventually face a resource crunch due to the never-ending expansion of data warehouses and the online analytical processing (OLAP) demands on the underlying network. Cloud computing has instigated new hope for the future prospects of BI. However, how will BI be implemented on the Cloud, and what will the traffic and demand profiles look like? This research attempts to answer these key questions about taking BI to the Cloud. Cloud hosting of BI is demonstrated through an OPNET simulation comprising a Cloud model with multiple OLAP application servers applying parallel query loads to an array of servers hosting relational databases. The simulation results show that extensible parallel processing of database servers on the Cloud can efficiently handle OLAP application demands.
    Journal of Computer and System Sciences 01/2014; · 1.00 Impact Factor
  •
    ABSTRACT: The evolution of communication protocols, sensory hardware, mobile and pervasive devices, alongside social and cyber-physical networks, has made the Internet of things (IoT) an interesting concept with inherent complexities as it is realised. Such complexities range from addressing mechanisms to information management and from communication protocols to presentation and interaction within the IoT. Although existing Internet and communication models can be extended to provide the basis for realising the IoT, they may not be sufficiently capable of handling the new paradigms it introduces, such as social communities, smart spaces, privacy and personalisation of devices and information, modelling and reasoning. With interaction models in the IoT moving from the orthodox service-consumption model towards an interactive conversational model, nature-inspired computational models appear to be candidate representations. Specifically, this research contends that the reactive and interactive nature of the IoT makes chemical reaction-inspired approaches particularly well suited to these requirements. This paper presents a chemical reaction-inspired computational model using the concepts of graphs and reflection, which addresses the complexities associated with the visualisation, modelling, interaction, analysis and abstraction of information in the IoT. Copyright © 2013 John Wiley & Sons, Ltd.
    Concurrency and Computation Practice and Experience 09/2013; · 0.85 Impact Factor
  •
    ABSTRACT: The concept behind cloud computing is to facilitate a wider distribution of hardware- and software-based services in the form of a consolidated infrastructure of various computing enterprises. In practice, cloud computing can be seen as an environment that combines cluster and grid characteristics in a single setting. Currently, cloud computing resources are limited in interoperability, mainly because of the homogeneity and coherency of their resources. However, the number of users demanding cloud services has increased dramatically, and with it the need for collaborative clouds, raising issues of scalability and customisability in managing multi-tenancy. Here, we present an algorithmic model for managing the interoperability of the cloud environment, namely the inter-cloud, and integrate our theoretical approach from the perspective of orchestrating job execution in a distributed setting.
    International Journal of High Performance Computing and Networking 09/2013; 7(3):156-172.
  • Georgios Exarchakos, Nick Antonopoulos
    ABSTRACT: Highly dynamic overlay networks have a native ability to adapt their topology through rewiring in response to resource location and migration. However, this characteristic is not fully exploited in distributed discovery algorithms for nomadic resources. Recent and emergent computing paradigms (e.g. agile, nomadic, cloud, peer-to-peer computing) increasingly assume highly intermittent and nomadic resources shared over large-scale overlays. This work presents a discovery mechanism, Stalkers (and its three versions: Floodstalkers, Firestalkers and k-Stalkers), that cooperatively extracts implicit knowledge embedded within the network topology and quickly adapts to changes in resource locations. Stalkers aims to trace resource migrations by following only the links created by recent requestors. This allows search paths to bypass highly congested nodes, use collective knowledge to locate resources, and respond quickly to dynamic environments. Numerous experiments have shown a higher success rate and more stable performance compared to other related blind-search mechanisms. In fast-changing topologies in particular, the Firestalkers version exhibits a good success rate, with low latency and message cost compared to other mechanisms.
    Future Generation Computer Systems 08/2013; 29(6):1473–1484. · 2.64 Impact Factor
  • Kan Zhang, Nick Antonopoulos
    ABSTRACT: Peer-to-Peer (P2P) networking is an alternative to cloud computing for relatively informal trade. One of the major obstacles to its development is the free-riding problem, which significantly degrades the scalability, fault tolerance and content availability of these systems. Incentive mechanisms based on bartering exchange rings are among the most common solutions to this problem: they organise users with asymmetric interests into bartering exchange rings, forcing users to contribute while consuming. However, existing ring-formation approaches are inefficient and static. This paper proposes a novel cluster-based incentive mechanism (CBIM) that enables dynamic ring formation by modifying the Query Protocol of the underlying P2P system. It also uses a reputation system to alleviate malicious behaviour: users identify free riders by fully utilising their local transaction information, and identified free riders are blacklisted and thus isolated. The simulation results indicate that applying the CBIM noticeably increases the request success rate, since rational nodes are forced to become more cooperative and free-riding behaviour can be identified to a certain extent.
    Future Generation Computer Systems - FGCS. 01/2013;
  • S. Sotiriadis, N. Bessis, P. Kuonen, N. Antonopoulos
    ABSTRACT: This work covers an inter-cloud meta-scheduling system that encompasses the essential components of an interoperable cloud setting for wide service dissemination. The study illustrates a set of distributed and decentralised operations with meta-computing characteristics. This is achieved by using meta-brokers, middle-standing components that orchestrate the decision-making process to select the most appropriate datacentre resource among collaborating clouds. The selection is based on heuristic performance criteria (e.g. service execution time, latency and energy efficiency). Our solution is more advanced than conventional centralised schemes, as it offers robust, real-time, scalable, elastic and flexible service scheduling in a fully decentralised and dynamic manner. Issues related to bottlenecks under multiple service requests, heterogeneity, information exposure and workload variation are also of prime focus. The whole process is based on random service requests from users that are clients of a sub-cloud of an inter-cloud datacentre, with access provided via a meta-broker. The inter-cloud facility distributes each request for service by enclosing each personalised service in a host virtual machine. The study presents a detailed discussion of the algorithmic model demonstrating the whole service dissemination, allocation, execution and monitoring process, along with a preliminary implementation and configuration on the proposed SimIC simulation framework.
    Advanced Information Networking and Applications (AINA), 2013 IEEE 27th International Conference on; 01/2013
  •
    ABSTRACT: Virtualization offers flexible and rapid provisioning of physical machines. In recent years, the isolation and migration of virtual machines have improved resource utilisation as well as resource management techniques. This paper focuses on the migration process and on leveraging virtual machine handling through automation. We outline current trends and issues regarding datacentres that apply policy-based automation. The automated solution improves the efficiency of datacentre operations, focusing mainly on better resource handling and reduced power consumption. This could prove particularly useful in disaster recovery, where power supplies need to be optimised. The focus of this work is an approach to virtual machine and power management. We present a discussion showing such functionality by using migration to reduce server sprawl, minimise power consumption and balance load across physical machines.
    P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2013 Eighth International Conference on; 01/2013
  • S. Sotiriadis, N. Bessis, N. Antonopoulos, A. Anjum
    ABSTRACT: 'Simulating the Inter-Cloud' (SimIC) is a discrete event simulation toolkit based on the process-oriented simulation package SimJava. SimIC aims to replicate an inter-cloud facility in which multiple clouds collaborate to distribute service requests according to the desired simulation setup. The package encompasses the fundamental entities of the inter-cloud meta-scheduling algorithm, such as users, meta-brokers, local brokers, datacentres, hosts, hypervisors and virtual machines (VMs). Resource discovery and scheduling policies, together with VM allocation, re-scheduling and VM migration strategies, are included as well. Using SimIC, a modeller can design a fully dynamic inter-cloud setting in which collaboration is founded on the meta-scheduling-inspired characteristics of distributed resource managers that exchange user requirements as events in real-time simulations. SimIC aims to achieve interoperability, flexibility and service elasticity while introducing the notion of heterogeneity across multiple clouds' configurations. In addition, it supports optimisation of a variety of selected performance criteria for a diversity of entities. The crucial consideration of dynamics has been implemented by allowing reactive orchestration based on the current workload of already-executed heterogeneous user specifications. These take the form of text files that the modeller can load into the toolkit, and they occur in real time at different simulation intervals. Finally, each unique request is scheduled for execution on an internal cloud datacentre host VM capable of fulfilling the service contract, which is formally specified in Service Level Agreements (SLAs) based on user profiling.
    Advanced Information Networking and Applications (AINA), 2013 IEEE 27th International Conference on; 01/2013
  • S. Sotiriadis, N. Bessis, N. Antonopoulos
    ABSTRACT: In recent years, much effort has been put into analysing the performance of large-scale distributed systems such as grids, clouds and inter-clouds with respect to a diversity of resources and user requirements. A common way to achieve this is to use simulation frameworks to evaluate novel models prior to developing solutions in costly settings. In this work we focus on the SimIC simulation toolkit as an innovative discrete event-driven solution for mimicking the inter-cloud service formation, dissemination and execution phases, processes that are bundled in the inter-cloud meta-scheduling (ICMS) framework. Our work has meta-inspired characteristics, as we treat the inter-cloud as a decentralised and dynamic computing environment in which meta-brokers act as distributed management nodes for dynamic, real-time decision making in an identical manner. To this end, we study the performance of service distribution among clouds based on a variety of metrics (e.g. execution time and turnaround time) across different heterogeneous inter-cloud topologies. We also explore the behaviour of the ICMS for different user submissions in terms of their computational requirements. The aim is to produce results for a benchmark analysis of clouds to serve future research on cloud and inter-cloud performance evaluation. The results are diverse across the different performance metrics. For the ICMS in particular, an increased performance tendency is observed when the system scales to massive numbers of user requests, implying improved scalability and service elasticity.
    Advanced Information Networking and Applications Workshops (WAINA), 2013 27th International Conference on; 01/2013
  • H. Al-Aqrabi, Lu Liu, R. Hill, ZhiJun Ding, N. Antonopoulos
    ABSTRACT: Business intelligence (BI) is a critical software system employed by the higher management of organisations to present business performance reports through Online Analytical Processing (OLAP) functionalities. BI faces sophisticated security issues given its strategic importance to the higher management of business entities. Scholars have emphasised enhanced session-, presentation- and application-layer security in BI, in addition to the usual network- and transport-layer security controls, because an unauthorised user could gain access to highly sensitive consolidated business information in a BI system. To protect a BI environment, a number of controls are needed at the level of database objects, application files and the underlying servers. In a cloud environment, controls are needed across all the components employed in the service-oriented architecture hosting BI on the cloud. Hence, a BI environment (whether self-hosted or cloud-hosted) is expected to face significant security overheads. In this context, two models for securing BI on a cloud are simulated in this paper. The first is based on securing BI using a Unified Threat Management (UTM) cloud; the second is based on distributed security controls embedded within the BI server arrays deployed throughout the cloud. The simulation results revealed that the UTM model is expected to cause more overheads and bottlenecks per OLAP user than the distributed security model. However, the distributed security model is expected to pose greater administrative-control effectiveness challenges than the UTM model. Based on the simulation results, it is recommended that a BI security model on a cloud comprise network-, transport-, session- and presentation-layer security controls through UTM, and application-layer security through distributed security components. A mixed environment of both models will ensure technically sound security controls, better security processes, clearly defined roles and accountabilities, and effective controls.
    Service Oriented System Engineering (SOSE), 2013 IEEE 7th International Symposium on; 01/2013
  •
    ABSTRACT: Network virtualization is a promising solution that can prevent network ossification by allowing multiple heterogeneous virtual networks (VNs) to cohabit on a shared substrate network. It provides flexibility and promotes diversity. A key issue that ...
    Journal of Network and Systems Management 12/2012; 20(4). · 0.43 Impact Factor
  •
    ABSTRACT: Cloud computing is driving a new revolution in the computing world. Elasticity is one of the key features that makes cloud computing so attractive. The cloud is still in its infancy, however, and faces a wide range of security threats; these security shortcomings are slowing the spread of cloud computing. This paper focuses on the security threats faced by weblets during their migration between the cloud and mobile devices. Weblet migration is an important task in implementing elasticity in the cloud. As a first step, a secure communication channel for weblet migration is designed by deploying the secure shell protocol. The vulnerabilities in some authentication mechanisms are highlighted, and a better way of authenticating weblets with SFTP (Secure File Transfer Protocol) is suggested. Finally, managing the traffic in the designed channel effectively with the aid of a back-pressure technique is also covered.
  • Athena Eftychiou, Bogdan Vrusias, Nick Antonopoulos
    ABSTRACT: To build a scalable, robust and accurate P2P network, the network must be able to manage large amounts of information efficiently. This paper proposes a semantic-driven model in which the network topology is adaptively shaped based on the peers' semantic knowledge and the association between network size, peer connectivity and the frequency of requested concepts. The proposed architecture follows a two-layer approach: the upper layer forms the semantic knowledge of the network through super-peers; the lower layer of peers represents the network resources. The network knowledge is formally represented by a domain-specific ontology built using collective-intelligence techniques. During resource discovery, queries are intelligently routed in the semantic layer via ontology-supported decisions, achieving, according to the experimental results, higher query success and reduced network traffic. The proposed model has been experimentally evaluated, and the results show that the semantic-driven network outperforms existing P2P networks.
    International Journal of Grid and Utility Computing 01/2012; 3(4):271-283.
  • A. Hinds, S. Sotiriadis, N. Bessis, N. Antonopoulos
    ABSTRACT: This paper describes the need for a unified simulation framework that defines the simulation tools and configuration settings researchers should use to perform comparative simulations and test the performance of security tools for the AODV MANET routing protocol. The key objectives of the proposed framework are to provide an unbiased, repeatable simulation environment that collects all important performance metrics and has a configuration optimised for the AODV protocol's performance. Any security tool can then be simulated using the framework, showing its performance impact against the AODV baseline results or other security tools. The framework controls the network performance metrics and mobility models used in the simulation. It is anticipated that this framework will enable researchers to easily repeat experiments and directly compare results without configuration settings influencing the outcome.
    Emerging Intelligent Data and Web Technologies (EIDWT), 2012 Third International Conference on; 01/2012
  • S. Sotiriadis, N. Bessis, F. Xhafa, N. Antonopoulos
    ABSTRACT: Cloud computing provides an efficient and flexible means for various services to meet the diverse and escalating needs of IT end-users. It offers novel functionality, including the utilisation of remote services, in addition to virtualization technology. The latter offers an efficient way to harness the cloud's power by fragmenting a physical cloud host into small, manageable virtual portions. As a norm, the virtualized parts are generated by the cloud provider's administrator through hypervisor software, based on generic needs for various services. However, several obstacles arise from this generalised, static approach. In this paper, we propose a model for dynamically instantiating virtual machines according to current job characteristics. We then simulate a virtualized cloud environment to evaluate the model's dynamism by measuring the correlation of virtual machines to hosts for certain job variations. This allows us to compute the expected average execution time of various virtual machine instantiations per job length.
    Complex, Intelligent and Software Intensive Systems (CISIS), 2012 Sixth International Conference on; 01/2012
  • H. Al-Aqrabi, Lu Liu, R. Hill, N. Antonopoulos
    ABSTRACT: Cloud computing is gradually gaining popularity among businesses due to its distinct advantages over self-hosted IT infrastructures. Software-as-a-service providers serve as the primary interface to the business user community. However, strategies and methods for hosting mission-critical business intelligence (BI) applications on the cloud are still being researched. BI is a highly resource-intensive system requiring large-scale parallel processing and significant storage capacity to host the data warehouses. OLAP (online analytical processing) is the user-end interface of BI, designed to present multi-dimensional graphical reports to end users. OLAP employs data cubes formed as a result of multidimensional queries run on an array of data warehouses. In self-hosted environments it was feared that BI would eventually face a resource crunch, because it would not be feasible for companies to keep adding resources to host the never-ending expansion of data warehouses and the OLAP demands on the underlying network. Cloud computing has instigated new hope for the future prospects of BI. But how will BI be implemented on the cloud, and what will the traffic and demand profiles look like? This paper attempts to answer these key questions about taking BI to the cloud. Cloud hosting of BI is demonstrated through an OPNET simulation comprising a cloud model with multiple OLAP application servers applying parallel query loads to an array of servers hosting relational databases. The simulation results show that true, extensible parallel processing of database servers on the cloud can efficiently handle OLAP application demands. Hence, the BI designer should plan for a highly partitioned database running on massively parallel database servers, in which each server hosts at least one partition of the underlying database serving the OLAP demands.
    High Performance Computing and Communication & 2012 IEEE 9th International Conference on Embedded Software and Systems (HPCC-ICESS), 2012 IEEE 14th International Conference on; 01/2012
  •
    ABSTRACT: Ubiquitous computing environments are characterised by smart, interconnected artefacts embedded in our physical world that provide useful services to human inhabitants unobtrusively. Mobile devices are becoming the primary tools for human interaction with these embedded artefacts and for the utilisation of services available in smart computing environments such as clouds. Advances in the capabilities of mobile devices allow a number of user- and environment-related context consumers to be hosted on them. Without a coordinating component, these context consumers and providers are a potential burden on device resources; in particular, uncoordinated computation and communication with cloud-enabled services can negatively impact battery life. Energy conservation is therefore a major concern in realising the collaboration between mobile-device-based context-aware applications and cloud-based services. This paper presents the concept of a context-brokering component that aids the coordination and communication of context information between mobile devices and services deployed in a cloud infrastructure. A prototype context broker is experimentally analysed for its effect on energy conservation when accessing and coordinating with cloud services on a smart device, with results showing a reduction in energy consumption.
    Journal of Ambient Intelligence and Humanized Computing 01/2012; arXiv:1202.5519.