Nick Antonopoulos

University of Derby, Derby, England, United Kingdom

Publications (95) · 40.51 Total impact

  • John Panneerselvam · Lu Liu · Nick Antonopoulos · Yuan Bo
    ABSTRACT: Alongside the healthy development of Cloud-based technologies across various application deployments, the energy consumed by excess usage of Information and Communication Technology (ICT) resources is a serious concern demanding effective solutions. Auto-scaling Cloud resources in accordance with incoming user demand, thereby reducing idle resources, is one solution that not only cuts excess energy consumption but also helps maintain Quality of Service (QoS). To achieve this, estimating user demand in advance with a reliable level of accuracy is an integral and vital component. With this in mind, this work analyses Cloud workloads and evaluates the performance of two widely used prediction techniques, Markov modelling and Bayesian modelling, on 7 hours of Google cluster data. An important outcome is the categorization and characterization of Cloud workloads, which will assist in modelling parameters for user demand prediction. (An illustrative sketch of the Markov approach follows this entry.)
    2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing, London; 12/2014
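    The paper evaluates Markov and Bayesian predictors on Google cluster traces. As a hedged, minimal sketch of the Markov side only (the discretisation of demand into levels, and all names, are illustrative assumptions, not taken from the paper):

    ```python
    from collections import Counter, defaultdict

    def fit_markov(levels):
        """Fit a first-order Markov chain over discretised demand levels."""
        counts = defaultdict(Counter)
        for cur, nxt in zip(levels, levels[1:]):
            counts[cur][nxt] += 1
        # Normalise transition counts into probabilities.
        return {s: {t: c / sum(row.values()) for t, c in row.items()}
                for s, row in counts.items()}

    def predict_next(chain, current):
        """Predict the most probable next demand level."""
        row = chain.get(current)
        return max(row, key=row.get) if row else current

    # Example: CPU demand discretised into low/medium/high levels.
    trace = ["low", "low", "med", "high", "med", "med", "low", "med"]
    chain = fit_markov(trace)
    print(predict_next(chain, "med"))
    ```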
  • John Panneerselvam · Anthony Atojoko · Kim Smith · Lu Liu · Nick Antonopoulos
    ABSTRACT: The goal of Opportunistic Networks (OppNets) is to enable message transmission in an infrastructure-less environment where a reliable end-to-end connection between hosts is not possible at all times. OppNets play a crucial role in today's communications, since it is still not possible to build communication infrastructure in some geographical areas, including mountains, oceans and other remote areas. Nodes participating in the message forwarding process in OppNets experience frequent disconnections, so employing an appropriate routing protocol to achieve successful message delivery is one of the key requirements. Routing challenges are complex in OppNets due to the dynamic nature and topology of these intermittent networks, which complicates the choice of a suitable protocol for message forwarding in opportunistic scenarios. With this in mind, this paper analyses a number of algorithms under each class of routing techniques that support message forwarding in OppNets and compares the studied algorithms in terms of performance, forwarding technique, outcomes and success rate. An important outcome of this paper is the identification of the optimal routing protocol under each class of routing. (A toy sketch of flooding-based forwarding follows this entry.)
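    Flooding-based protocols such as Epidemic routing fall within the classes the survey compares. As a toy, hedged sketch of that class (the contact-graph representation is an assumption, not from the paper), each carrier copies the message to every node it meets in a round:

    ```python
    def epidemic_step(carriers, contacts):
        """One round of epidemic forwarding: every carrier copies the
        message to every node it happens to meet this round."""
        new_carriers = set(carriers)
        for node in carriers:
            new_carriers.update(contacts.get(node, ()))
        return new_carriers

    # Toy contact graph for one time step (who met whom).
    contacts = {"a": {"b"}, "b": {"c", "d"}, "d": {"e"}}
    carriers = {"a"}
    for _ in range(3):  # three encounter rounds
        carriers = epidemic_step(carriers, contacts)
    print(sorted(carriers))  # the message has spread hop by hop
    ```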
  • Lu Liu · Thomas Stimpson · Nick Antonopoulos · Zhijun Ding · Yongzhao Zhan
    ABSTRACT: Wireless networks are an integral part of day-to-day life for many people, with businesses and home users relying on them for connectivity and communication. This paper examines problems relating to wireless security and reviews the background literature. Primary research was then undertaken to capture the current state of wireless security, using previous work to build a timeline of encryption usage and to exhibit the differences between 2009 and 2012. Moreover, a novel 802.11 denial-of-service device was created to demonstrate how a new threat can be designed from current, freely available technologies and equipment. The findings are used to produce recommendations presenting the most appropriate countermeasures to the threats found.
    Wireless Personal Communications 04/2014; 75(3):1669-1687. DOI:10.1007/s11277-013-1386-3 · 0.65 Impact Factor
  • Chris Howden · Lu Liu · ZhiYuan Li · JianXin Li · Nick Antonopoulos
    ABSTRACT: Online social networks (OSNs) are immensely prevalent and have become a ubiquitous and important part of modern, developed society. However, they pose significant problems to digital forensic investigators: data may reside on multiple servers in multiple countries, across multiple jurisdictions, and capturing it before it is overwritten or deleted is a known problem, mirrored in other cloud-based services. In this article, a novel method is developed for the extraction, analysis, visualization and comparison of snapshotted user profile data from the online social network Twitter. The research follows a process of design, implementation, simulation and experimentation, and the source code of the data extraction tool has been made available on the Internet. (A generic sketch of snapshot comparison follows this entry.)
    Science China Information Sciences 02/2014; 57(3):1-20. DOI:10.1007/s11432-014-5069-9 · 0.85 Impact Factor
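    The paper's tool targets Twitter profile snapshots; as a generic, hedged illustration of comparing two snapshots (the field names are invented, not taken from the tool):

    ```python
    def diff_snapshots(old, new):
        """Compare two snapshots of a user profile and report
        added, removed and changed fields."""
        added   = {k: new[k] for k in new.keys() - old.keys()}
        removed = {k: old[k] for k in old.keys() - new.keys()}
        changed = {k: (old[k], new[k])
                   for k in old.keys() & new.keys() if old[k] != new[k]}
        return {"added": added, "removed": removed, "changed": changed}

    # Two snapshots of the same (hypothetical) profile, a day apart.
    snap_mon = {"followers": 120, "bio": "researcher", "location": "Derby"}
    snap_tue = {"followers": 125, "bio": "researcher"}
    print(diff_snapshots(snap_mon, snap_tue))
    ```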
  • Alexander Betts · Lu Liu · Zhiyuan Li · Nick Antonopoulos
    ABSTRACT: Peer-to-peer networks are becoming increasingly popular as a method of creating highly scalable and robust distributed systems. To address performance issues when scaling traditional unstructured protocols to large network sizes, many protocols have been proposed that use distributed hash tables (DHTs) to provide a decentralised and robust routing table. This paper investigates the most significant structured DHT protocols through a comparative literature review and critical analysis of results from controlled simulations. It identifies several key design differences, with Pastry performing best in every test. Chord performs worst, mostly attributable to its unidirectional distance metric, while heavy generation of maintenance messages holds Kademlia back in bandwidth tests. (A small sketch contrasting the two distance metrics follows this entry.)
    International Journal of Embedded Systems 01/2014; 6(2/3):250-256. DOI:10.1504/IJES.2014.063823
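    The unidirectional metric blamed for Chord's result contrasts with Kademlia's symmetric XOR metric; a small sketch of the two (the toy identifier size is an assumption):

    ```python
    ID_BITS = 8  # toy identifier space; real DHTs use 128-160 bits

    def chord_distance(a, b):
        """Chord: clockwise distance on the ring. Unidirectional,
        so in general chord_distance(a, b) != chord_distance(b, a)."""
        return (b - a) % (1 << ID_BITS)

    def kademlia_distance(a, b):
        """Kademlia: XOR metric, symmetric by construction."""
        return a ^ b

    print(chord_distance(10, 200), chord_distance(200, 10))    # 190, 66
    print(kademlia_distance(10, 200), kademlia_distance(200, 10))  # 194, 194
    ```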
  • Hussain Al-Aqrabi · Lu Liu · Richard Hill · Nick Antonopoulos
    ABSTRACT: In self-hosted environments it was feared that Business Intelligence (BI) would eventually face a resource crunch due to the never-ending expansion of data warehouses and the online analytical processing (OLAP) demands placed on the underlying network. Cloud computing has raised new hope for the future prospects of BI. However, how will BI be implemented on the Cloud, and what will the traffic and demand profile look like? This research attempts to answer these key questions about taking BI to the Cloud. Cloud hosting of BI is demonstrated with an OPNET simulation comprising a Cloud model in which multiple OLAP application servers apply parallel query loads to an array of servers hosting relational databases. The simulation results indicate that extensible parallel processing across database servers on the Cloud can efficiently handle OLAP application demands. (A generic sketch of parallel query fan-out follows this entry.)
    Journal of Computer and System Sciences 01/2014; 81(1). DOI:10.1016/j.jcss.2014.06.013 · 1.14 Impact Factor
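    The simulated workload is parallel OLAP queries against an array of database servers. As a generic, hedged illustration of such fan-out (not the paper's OPNET model; all names are placeholders):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def fan_out(query, db_servers, run_query):
        """Send the same OLAP query to an array of database servers
        in parallel and gather the partial results."""
        with ThreadPoolExecutor(max_workers=len(db_servers)) as pool:
            futures = [pool.submit(run_query, s, query) for s in db_servers]
            return [f.result() for f in futures]

    # Stand-in for a real database call.
    def run_query(server, query):
        return f"{server}: rows for '{query}'"

    print(fan_out("total sales by region", ["db1", "db2", "db3"], run_query))
    ```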
  • Stelios Sotiriadis · Nik Bessis · Nick Antonopoulos · Richard Hill
    ABSTRACT: The concept behind cloud computing is to facilitate a wider distribution of hardware- and software-based services in the form of a consolidated infrastructure of various computing enterprises. In practice, cloud computing can be seen as an environment that combines cluster and grid characteristics in a single setting. Currently, cloud computing resources are limited in interoperability, mainly because of the homogeneity and coherency of their resources. However, the number of users demanding cloud services has increased dramatically, and with it the need for collaborative clouds; this raises issues of scalability and customisability when managing multi-tenancy. Here, we present an algorithmic model for managing the interoperability of the cloud environment, namely the inter-cloud, and integrate our theoretical approach from the perspective of orchestrating job execution in a distributed setting.
    International Journal of High Performance Computing and Networking 09/2013; 7(3):156-172. DOI:10.1504/IJHPCN.2013.056518
  • Ahsan Ikram · Ashiq Anjum · Richard Hill · Nick Antonopoulos · Lu Liu · Stelios Sotiriadis
    ABSTRACT: The evolution of communication protocols, sensory hardware, mobile and pervasive devices, alongside social and cyber-physical networks, has made the Internet of Things (IoT) an interesting concept with inherent complexities as it is realised. Such complexities range from addressing mechanisms to information management, and from communication protocols to presentation and interaction within the IoT. Although existing Internet and communication models can be extended to provide a basis for realising the IoT, they may not be capable of handling the new paradigms the IoT introduces, such as social communities, smart spaces, privacy and personalisation of devices and information, modelling and reasoning. With interaction models in the IoT moving from the orthodox service-consumption model towards an interactive, conversational model, nature-inspired computational models appear to be candidate representations. Specifically, this research contends that the reactive and interactive nature of the IoT makes chemical reaction-inspired approaches particularly well suited to such requirements. This paper presents a chemical reaction-inspired computational model using the concepts of graphs and reflection, which addresses the complexities associated with the visualisation, modelling, interaction, analysis and abstraction of information in the IoT. (A toy sketch of the chemical-reaction paradigm follows this entry.)
    Concurrency and Computation Practice and Experience 09/2013; 27(8). DOI:10.1002/cpe.3131 · 1.00 Impact Factor
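    As a hedged illustration of the chemical-reaction paradigm the paper draws on (a Gamma-style multiset rewriting loop, not the authors' graph-and-reflection model): molecules react whenever a rule's condition holds, until the solution stabilises:

    ```python
    def react(solution, condition, action):
        """Gamma-style chemical machine: repeatedly pick any pair of
        molecules that satisfies `condition`, replace them with the
        products of `action`, and stop when no pair can react."""
        changed = True
        while changed:
            changed = False
            for i, a in enumerate(solution):
                for j, b in enumerate(solution):
                    if i != j and condition(a, b):
                        products = action(a, b)
                        solution = [m for k, m in enumerate(solution)
                                    if k not in (i, j)] + products
                        changed = True
                        break
                if changed:
                    break
        return solution

    # Classic example: computing a maximum by letting larger molecules
    # "consume" smaller ones until one molecule remains.
    print(react([3, 7, 2, 9, 4],
                condition=lambda a, b: a <= b,
                action=lambda a, b: [b]))  # -> [9]
    ```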
  • Georgios Exarchakos · Nick Antonopoulos
    ABSTRACT: Highly dynamic overlay networks have a native ability to adapt their topology, through rewiring, to resource location and migration. However, this characteristic is not fully exploited in distributed discovery algorithms for nomadic resources. Recent and emergent computing paradigms (e.g. agile, nomadic, cloud, peer-to-peer computing) increasingly assume highly intermittent and nomadic resources shared over large-scale overlays. This work presents a discovery mechanism, Stalkers (in three versions: Floodstalkers, Firestalkers and k-Stalkers), that cooperatively extracts implicit knowledge embedded in the network topology and quickly adapts to changes in resource locations. Stalkers traces resource migrations by following only links created by recent requestors. This allows search paths to bypass highly congested nodes, use collective knowledge to locate resources, and respond quickly to dynamic environments. Numerous experiments have shown a higher success rate and more stable performance compared with related blind-search mechanisms; in particular, in fast-changing topologies the Firestalkers version exhibits a good success rate with low latency and message cost. (A rough sketch of trail-following search appears after this entry.)
    Future Generation Computer Systems 08/2013; 29(6):1473–1484. DOI:10.1016/j.future.2012.12.008 · 2.79 Impact Factor
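    A rough, hedged sketch of the trail-following idea as the abstract describes it (the data structures are assumptions, not the published algorithm): each node remembers which neighbours recently requested a resource, and a search walks those requestor links toward the resource's current holder:

    ```python
    def stalkers_search(start, resource, trails, holders, ttl=6):
        """Follow links left by recent requestors of `resource`.
        `trails[node][resource]` lists neighbours that recently asked
        for it; the search walks those trails instead of flooding."""
        node, path = start, [start]
        for _ in range(ttl):
            if node in holders.get(resource, set()):
                return path  # found the (possibly migrated) resource
            hops = trails.get(node, {}).get(resource, [])
            if not hops:
                return None  # the trail went cold
            node = hops[-1]  # follow the most recent requestor
            path.append(node)
        return None

    # Toy overlay: resource "r" migrated to node "d"; the trail of
    # past requests still points the way.
    trails = {"a": {"r": ["b"]}, "b": {"r": ["c"]}, "c": {"r": ["d"]}}
    holders = {"r": {"d"}}
    print(stalkers_search("a", "r", trails, holders))  # ['a', 'b', 'c', 'd']
    ```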
  • Lu Liu · Andrew Jones · Nick Antonopoulos · Zhijun Ding · Yongzhao Zhan
    ABSTRACT: Massively Multiplayer Online Games (MMOGs) are networked games that allow a large number of people to play together. Classically, MMOG worlds are hosted on many powerful servers, and players moving around the world are passed from server to server as they traverse the environment. Running a large number of servers is challenging, and a developer wanting to enter the MMOG market faces many considerations. If a P2P network can successfully host an MMOG, the cost of running a server farm is taken out of the equation, allowing groups with small budgets to enter the MMOG market and increasing competition in the marketplace. In this paper, methods for designing P2P massively multiplayer game protocols are presented, performance bottlenecks are evaluated and highlighted using simulations, and business viability is discussed.
    Multimedia Tools and Applications 04/2013; 74(8). DOI:10.1007/s11042-013-1662-y · 1.35 Impact Factor
  • Stelios Sotiriadis · Nik Bessis · Nick Antonopoulos
    ABSTRACT: In recent years, much effort has been put into analysing the performance of large-scale distributed systems such as grids, clouds and inter-clouds with respect to a diversity of resources and user requirements. A common way to achieve this is to use simulation frameworks to evaluate novel models prior to developing solutions in high-cost settings. In this work we focus on the SimIC simulation toolkit, an innovative discrete-event-driven solution that mimics the inter-cloud service formation, dissemination and execution phases, processes bundled in the inter-cloud meta-scheduling (ICMS) framework. Our work has meta-inspired characteristics, as we treat the inter-cloud as a decentralized and dynamic computing environment in which meta-brokers act as distributed management nodes for dynamic, real-time decision making in an identical manner. To this end, we study the performance of service distribution among clouds based on a variety of metrics (e.g. execution time and turnaround) under different heterogeneous inter-cloud topologies, and we explore the behaviour of the ICMS for different user submissions in terms of their computational requirements. The aim is to produce benchmark results to serve future research on cloud and inter-cloud performance evaluation. The results are diverse across performance metrics; for the ICMS in particular, performance tends to improve as the system scales to massive numbers of user requests, implying improved scalability and service elasticity.
    Advanced Information Networking and Applications Workshops (WAINA), 2013 27th International Conference on; 03/2013
  • Stelios Sotiriadis · Nik Bessis · Pierre Kuonen · Nick Antonopoulos
    ABSTRACT: This work covers the inter-cloud meta-scheduling system, which encompasses the essential components of an interoperable cloud setting for wide service dissemination. The study illustrates a set of distributed and decentralized operations with meta-computing characteristics. This is achieved by using meta-brokers, intermediary components that orchestrate the decision-making process to select the most appropriate datacenter resource among collaborating clouds. The selection is based on heuristic performance criteria (e.g. service execution time, latency, energy efficiency). Our solution improves on conventional centralized schemes by offering robust, real-time, scalable, elastic and flexible service scheduling in a fully decentralized and dynamic manner. Issues related to bottlenecks under multiple service requests, heterogeneity, information exposure and workload variation are also of prime focus. The whole process is based on random service requests from users that are clients of a sub-cloud of an inter-cloud datacenter, with access via a meta-broker; the inter-cloud facility distributes each request by enclosing the personalized service in a host virtual machine. The study presents a detailed discussion of the algorithmic model, demonstrating the whole service dissemination, allocation, execution and monitoring process, along with a preliminary implementation and configuration on the proposed SimIC simulation framework. (A toy sketch of criteria-based selection follows this entry.)
    Advanced Information Networking and Applications (AINA), 2013 IEEE 27th International Conference on; 03/2013
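    The selection step lends itself to a small illustration. A hedged sketch (the criteria, weights and field names are assumptions, not the ICMS specification) of a meta-broker ranking collaborating clouds by weighted heuristic criteria:

    ```python
    def select_datacenter(offers, weights):
        """Rank datacenter offers by a weighted sum of heuristic
        criteria (lower is better for every criterion here)."""
        def score(offer):
            return sum(weights[k] * offer[k] for k in weights)
        return min(offers, key=score)

    # Hypothetical offers reported by three collaborating clouds.
    offers = [
        {"name": "cloud-A", "exec_time": 120, "latency": 30, "energy": 0.8},
        {"name": "cloud-B", "exec_time": 90,  "latency": 55, "energy": 1.1},
        {"name": "cloud-C", "exec_time": 150, "latency": 20, "energy": 0.5},
    ]
    weights = {"exec_time": 0.5, "latency": 0.3, "energy": 0.2}
    print(select_datacenter(offers, weights)["name"])  # cloud-B
    ```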
  • Kan Zhang · Nick Antonopoulos
    ABSTRACT: Peer-to-Peer (P2P) networking is an alternative to cloud computing for relatively informal trade. One of the major obstacles to its development is the free-riding problem, which significantly degrades the scalability, fault tolerance and content availability of these systems. Incentive mechanisms based on bartering exchange rings are among the most common solutions: they organize users with asymmetric interests into bartering exchange rings, forcing users to contribute while they consume. However, existing ring formation approaches are inefficient and static. This paper proposes a novel cluster-based incentive mechanism (CBIM) that enables dynamic ring formation by modifying the query protocol of the underlying P2P system. It also uses a reputation system to mitigate malicious behaviour: users identify free riders from their local transaction information, and identified free riders are blacklisted and thus isolated. The simulation results indicate that applying the CBIM noticeably increases the request success rate, since rational nodes are forced to become more cooperative and free-riding behaviour can be identified to a certain extent. (A toy sketch of the local blacklisting step follows this entry.)
    Future Generation Computer Systems 01/2013; 29(1). DOI:10.1016/j.future.2011.06.005 · 2.79 Impact Factor
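    A toy, hedged sketch of the local free-rider identification step (the thresholds and record format are assumptions, not the CBIM specification): each peer tallies its own transaction outcomes and blacklists neighbours whose contribution ratio is too low:

    ```python
    def update_blacklist(history, min_ratio=0.2, min_obs=5):
        """Blacklist peers whose locally observed contribution ratio
        (uploads served / downloads taken) falls below `min_ratio`,
        once enough transactions have been observed."""
        blacklist = set()
        for peer, (uploads, downloads) in history.items():
            if uploads + downloads >= min_obs:
                ratio = uploads / max(downloads, 1)
                if ratio < min_ratio:
                    blacklist.add(peer)  # likely free rider: isolate it
        return blacklist

    # Local transaction log: peer -> (chunks served to us, chunks taken).
    history = {"p1": (9, 10), "p2": (0, 12), "p3": (1, 2)}
    print(update_blacklist(history))  # {'p2'}
    ```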
  • Hussain Al-Aqrabi · Lu Liu · Richard Hill · ZhiJun Ding · Nick Antonopoulos
    ABSTRACT: Business intelligence (BI) is a critical software system employed by the higher management of organizations to present business performance reports through Online Analytical Processing (OLAP) functionality. BI faces sophisticated security issues given its strategic importance to the higher management of business entities. Scholars have emphasized enhanced session, presentation and application layer security in BI, in addition to the usual network and transport layer controls, because an unauthorized user could gain access to highly sensitive consolidated business information in a BI system. To protect a BI environment, controls are needed at the level of database objects, application files and the underlying servers; in a cloud environment, controls are needed across all components of the service-oriented architecture hosting BI. Hence a BI environment, whether self-hosted or cloud-hosted, is expected to face significant security overheads. In this context, two models for securing BI on a cloud are simulated in this paper: the first secures BI using a Unified Threat Management (UTM) cloud, and the second uses distributed security controls embedded within the BI server arrays deployed throughout the cloud. The simulation results reveal that the UTM model is expected to cause more overheads and bottlenecks per OLAP user than the distributed security model, whereas the distributed security model poses greater challenges to the effectiveness of administrative control. Based on the results, it is recommended that a BI security model on a cloud comprise network, transport, session and presentation layer security controls through UTM, and application layer security through distributed security components. A mixed environment of both models will ensure technically sound security controls, better security processes, clearly defined roles and accountabilities, and effective controls.
    Service Oriented System Engineering (SOSE), 2013 IEEE 7th International Symposium on; 01/2013
  • Stelios Sotiriadis · Nik Bessis · Nick Antonopoulos · Ashiq Anjum
    ABSTRACT: 'Simulating the Inter-Cloud' (SimIC) is a discrete-event simulation toolkit based on the process-oriented simulation package SimJava. SimIC replicates an inter-cloud facility in which multiple clouds collaborate to distribute service requests according to the desired simulation setup. The package encompasses the fundamental entities of the inter-cloud meta-scheduling algorithm, such as users, meta-brokers, local brokers, datacenters, hosts, hypervisors and virtual machines (VMs), together with resource discovery and scheduling policies, VM allocation, re-scheduling and VM migration strategies. Using SimIC, a modeller can design a fully dynamic inter-cloud setting in which collaboration is founded on the meta-scheduling-inspired characteristics of distributed resource managers that exchange user requirements as events in real-time simulations. SimIC aims to achieve interoperability, flexibility and service elasticity while introducing heterogeneity across multiple clouds' configurations, and it supports optimization of selected performance criteria for a diversity of entities. Dynamics are addressed through reactive orchestration based on the current workload of already-executing heterogeneous user specifications; these take the form of text files that the modeller loads into the toolkit, and loading occurs in real time at different simulation intervals. Finally, each unique request is scheduled for execution on a VM hosted in an internal cloud datacenter that is capable of fulfilling the service contract, formally specified in Service Level Agreements (SLAs) based on user profiling.
    Advanced Information Networking and Applications (AINA), 2013 IEEE 27th International Conference on; 01/2013
  • D.-A. Dasilva · Stelios Sotiriadis · Richard Hill · Nick Antonopoulos · Nik Bessis
    ABSTRACT: Virtualization offers flexible and rapid provisioning of physical machines. In recent years, isolation and migration of virtual machines have improved resource utilization as well as resource management techniques. This paper focuses on the migration process and on leveraging automation for virtual machine handling. We outline current trends and issues for datacentres that apply policy-based automation. An automated solution improves the efficiency of datacentre operations, mainly by improving resource handling and reducing power consumption; this could prove particularly useful in disaster recovery, where power supplies need to be optimized. The focus of this work is an approach for virtual machine and power management, and we discuss how migration can be used to reduce server sprawl, minimize power consumption and balance load across physical machines. (A toy sketch of such a consolidation policy follows this entry.)
    P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2013 Eighth International Conference on; 01/2013
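    A hedged sketch of one such policy (the thresholds and data structures are assumptions, not from the paper): drain VMs off under-utilised hosts so the idle machines can be powered down, reducing server sprawl and power draw:

    ```python
    def consolidate(hosts, low=0.25, high=0.75):
        """Policy sketch: drain hosts below `low` utilisation into the
        busiest hosts that stay under `high`, then power the drained
        hosts off. Returns (migrations, hosts_to_power_off)."""
        migrations, power_off = [], []
        for src, vms in hosts.items():
            if sum(vms.values()) < low:
                for vm, load in list(vms.items()):
                    # Find the fullest target that can still absorb the VM.
                    targets = [(sum(t.values()), dst)
                               for dst, t in hosts.items()
                               if dst != src and sum(t.values()) + load <= high]
                    if targets:
                        _, dst = max(targets)
                        hosts[dst][vm] = vms.pop(vm)
                        migrations.append((vm, src, dst))
                if not vms:
                    power_off.append(src)  # empty host: candidate for shutdown
        return migrations, power_off

    # Hypothetical hosts with per-VM utilisation fractions.
    hosts = {"h1": {"vm1": 0.1}, "h2": {"vm2": 0.5}, "h3": {"vm3": 0.6}}
    print(consolidate(hosts))  # vm1 migrates away, h1 can power off
    ```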
  • Lu Liu · Osama Masfary · Nick Antonopoulos
    ABSTRACT: The increasing electrical consumption of data centres is a growing concern for business owners, as it is quickly becoming a large fraction of the total cost of ownership. Ultra-small sensors can be deployed within a data centre to monitor environmental factors, lowering electrical costs and improving energy efficiency. Since servers and air conditioners are the top consumers of electrical power in the data centre, this research explores methods from each subsystem of the data centre as part of an overall energy-efficient solution. In this paper, we investigate current trends in Green IT awareness and how the deployment of small environmental sensors and site infrastructure equipment optimization techniques can address a global issue by reducing carbon emissions.
    Sensors 12/2012; 12(5):6610-28. DOI:10.3390/s120506610 · 2.25 Impact Factor
  • ABSTRACT: Network virtualization is a promising solution that can prevent network ossification by allowing multiple heterogeneous virtual networks (VNs) to cohabit on a shared substrate network. It provides flexibility and promotes diversity. A key issue that ...
    Journal of Network and Systems Management 12/2012; 20(4). DOI:10.1007/s10922-012-9254-0 · 0.80 Impact Factor
  • John Panneerselvam · Stelios Sotiriadis · Nik Bessis · Nick Antonopoulos
    ABSTRACT: Cloud computing is driving a new revolution in the computing world, and elasticity is one of the key features that make the cloud attractive. The cloud is still in its infancy, however, and faces a wide range of security threats; these security shortcomings are slowing the spread of cloud computing. This paper focuses on the security threats faced by weblets during their migration between the cloud and mobile devices, an important task when implementing elasticity in the cloud. As a primary step, a secure communication channel for weblet migration is designed by deploying the Secure Shell (SSH) protocol. Vulnerabilities in some authentication mechanisms are highlighted, and a better way of authenticating weblets with SFTP (Secure File Transfer Protocol) is suggested. Finally, the paper covers managing traffic effectively in the designed channel with the aid of a back-pressure technique. (A minimal sketch of an SFTP transfer follows this entry.)
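    A minimal sketch of the kind of key-authenticated SFTP transfer the paper argues for, using the paramiko library (the host, user, key and paths are placeholders, not values from the paper):

    ```python
    import paramiko

    def migrate_weblet(host, user, key_path, local_weblet, remote_path):
        """Push a weblet bundle to the target over SFTP, authenticating
        with an SSH key pair instead of a password."""
        client = paramiko.SSHClient()
        client.load_system_host_keys()  # verify the server's host key
        client.set_missing_host_key_policy(paramiko.RejectPolicy())
        client.connect(host, username=user, key_filename=key_path)
        try:
            sftp = client.open_sftp()
            sftp.put(local_weblet, remote_path)  # encrypted file transfer
            sftp.close()
        finally:
            client.close()

    # Placeholder values for illustration only.
    migrate_weblet("device.example.org", "weblet",
                   "/home/alice/.ssh/id_rsa",
                   "weblet.tar.gz", "/weblets/weblet.tar.gz")
    ```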
  • James Hardy · Lu Liu · Nick Antonopoulos · Weining Liu · Lei Cui · Jianxin Li
    ABSTRACT: Virtualisation is a prevalent technology in current computing. Among its many applications, it can be employed to reduce hardware costs through server consolidation, to implement 'green computing' by reducing power consumption, and as an underpinning process for cloud computing, enabling the creation of a range of virtual networks and virtual supercomputers. This paper presents performance measurements for a cloning system known as iVIC, developed at Beihang University, China. Extending earlier work, it focuses on the factors that limit the number of clones that can be successfully started. iVIC creates clusters of virtual computers that can communicate with each other through virtual switch mechanisms, which can also allow communication between the clone environment and the physical world. Testing was undertaken to identify the limiting factors for creating and starting large numbers of clone machines, to measure the power consumption of the physical system, and to assess the computational performance of the clones.
    04/2012; DOI:10.1109/ISORC.2012.14