Nick Antonopoulos

University of Derby, Derby, ENG, United Kingdom

Publications (80) · 15.62 Total Impact

  • Georgios Exarchakos, Nick Antonopoulos
    ABSTRACT: Highly dynamic overlay networks have a native ability to adapt their topology through rewiring to resource location and migration. However, this characteristic is not fully exploited in distributed resource discovery algorithms for nomadic resources. Recent and emergent computing paradigms (e.g. agile, nomadic, cloud, peer-to-peer computing) increasingly assume highly intermittent and nomadic resources shared over large-scale overlays. This work presents a discovery mechanism, Stalkers (and its three versions: Floodstalkers, Firestalkers and k-Stalkers), that is able to cooperatively extract implicit knowledge embedded within the network topology and quickly adapt to any changes of resource locations. Stalkers aims to trace resource migrations by following only links created by recent requestors. This characteristic allows search paths to bypass highly congested nodes, use collective knowledge to locate resources, and respond quickly to dynamic environments. Numerous experiments have shown a higher success rate and more stable performance compared to other related blind search mechanisms. More specifically, in fast-changing topologies, the Firestalkers version exhibits a good success rate, low latency and low message cost compared to other mechanisms.
    Future Generation Computer Systems 08/2013; 29(6):1473–1484. · 1.86 Impact Factor
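    The core idea of the Stalkers family lends itself to a compact illustration. Below is a minimal, hedged Python sketch of the mechanism as the abstract describes it: a walker that prefers the overlay links most recently rewired by successful requestors, so later searches "stalk" resource migrations. All class and function names here are hypothetical, not taken from the paper.

    ```python
    import random

    class Node:
        def __init__(self, node_id, resources=()):
            self.node_id = node_id
            self.resources = set(resources)
            self.links = {}  # neighbour Node -> logical time the link was last rewired

    def stalker_search(start, resource, clock, ttl=16):
        """Random walk that always prefers the most recently rewired link."""
        current, path = start, [start]
        for _ in range(ttl):
            if resource in current.resources:
                if current is not start:
                    # the successful requestor rewires toward the provider,
                    # leaving a fresh trail for later walkers to follow
                    start.links[current] = clock
                return path
            if not current.links:
                return None
            newest = max(current.links.values())
            nxt = random.choice([n for n, t in current.links.items() if t == newest])
            path.append(nxt)
            current = nxt
        return None
    ```

    Under this sketch, Floodstalkers would launch one walker per neighbour and k-Stalkers would launch k walkers in parallel; the per-walker logic stays the same.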
  • H. Al-Aqrabi, Lu Liu, R. Hill, ZhiJun Ding, N. Antonopoulos
    ABSTRACT: Business intelligence (BI) is a critical software system employed by the higher management of organizations for presenting business performance reports through Online Analytical Processing (OLAP) functionalities. BI faces sophisticated security issues given its strategic importance for the higher management of business entities. Scholars have emphasized enhanced session, presentation and application layer security in BI, in addition to the usual network and transport layer security controls, because an unauthorized user can gain access to highly sensitive consolidated business information in a BI system. To protect a BI environment, a number of controls are needed at the level of database objects, application files, and the underlying servers. In a cloud environment, the controls will be needed across all the components employed in the service-oriented architecture for hosting BI on the cloud. Hence, a BI environment (whether self-hosted or cloud-hosted) is expected to face significant security overheads. In this context, two models for securing BI on a cloud have been simulated in this paper. The first model secures BI using a Unified Threat Management (UTM) cloud; the second uses distributed security controls embedded within the BI server arrays deployed throughout the cloud. The simulation results revealed that the UTM model is expected to cause more overheads and bottlenecks per OLAP user than the distributed security model. However, the distributed security model is expected to pose greater administrative control effectiveness challenges than the UTM model. Based on the simulation results, it is recommended that a BI security model on a cloud should comprise network, transport, session and presentation layer security controls through UTM, and application layer security through the distributed security components. A mixed environment of both models will ensure technical soundness of security controls, better security processes, clearly defined roles and accountabilities, and effectiveness of controls.
    Service Oriented System Engineering (SOSE), 2013 IEEE 7th International Symposium on; 01/2013
  • S. Sotiriadis, N. Bessis, P. Kuonen, N. Antonopoulos
    ABSTRACT: This work covers an inter-cloud meta-scheduling system that encompasses the essential components of the interoperable cloud setting for wide service dissemination. The study illustrates a set of distributed and decentralized operations by highlighting meta-computing characteristics. This is achieved by using meta-brokers, a middle-standing component that orchestrates the decision-making process in order to select the most appropriate datacenter resource among collaborating clouds. The selection is based on heuristic performance criteria (e.g. service execution time, latency, energy efficiency etc.). Our solution is more advanced than conventional centralized schemes, as it offers robust, real-time, scalable, elastic and flexible service scheduling in a fully decentralized and dynamic manner. Issues related to bottlenecks under multiple service requests, heterogeneity, information exposition and variation of workloads are also of prime focus. In view of that, the whole process is based upon random service requests from users that are clients of a sub-cloud of an inter-cloud datacenter, with access via a meta-broker. The inter-cloud facility distributes each request for service by enclosing each personalized service into a host virtual machine. The study presents a detailed discussion of the algorithmic model demonstrating the whole service dissemination, allocation, execution and monitoring process, along with a preliminary implementation and configuration on the proposed SimIC simulation framework.
    Advanced Information Networking and Applications (AINA), 2013 IEEE 27th International Conference on; 01/2013
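    The selection step described in this abstract can be pictured as a weighted score over candidate datacenters. The following Python sketch is purely illustrative: the criteria names, field names and weights are assumptions for the example, not the paper's actual ICMS formulas.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Datacenter:
        name: str
        est_exec_time_s: float  # predicted service execution time
        latency_ms: float       # network latency to the requesting cloud
        energy_kwh: float       # estimated energy cost of hosting the VM

    def select_datacenter(candidates, weights=(0.5, 0.3, 0.2)):
        """Lower weighted score is better; a real broker would normalize the
        criteria to comparable scales first, which this sketch skips."""
        def score(dc):
            w_time, w_lat, w_energy = weights
            return (w_time * dc.est_exec_time_s
                    + w_lat * dc.latency_ms
                    + w_energy * dc.energy_kwh)
        return min(candidates, key=score)

    clouds = [Datacenter("dc-a", 120.0, 40.0, 1.2),
              Datacenter("dc-b", 90.0, 85.0, 1.5)]
    print(select_datacenter(clouds).name)  # dc-b under these weights
    ```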
  • S. Sotiriadis, N. Bessis, N. Antonopoulos, A. Anjum
    ABSTRACT: 'Simulating the Inter-Cloud' (SimIC) is a discrete event simulation toolkit based on the process-oriented simulation package SimJava. SimIC aims to replicate an inter-cloud facility wherein multiple clouds collaborate with each other for distributing service requests according to the desired simulation setup. The package encompasses the fundamental entities of the inter-cloud meta-scheduling algorithm such as users, meta-brokers, local brokers, datacenters, hosts, hypervisors and virtual machines (VMs). Additionally, resource discovery and scheduling policies together with VM allocation, re-scheduling and VM migration strategies are included. Using SimIC, a modeler can design a fully dynamic inter-cloud setting wherein collaboration is founded on the meta-scheduling-inspired characteristics of distributed resource managers that exchange user requirements as driven events in real-time simulations. SimIC aims to achieve interoperability, flexibility and service elasticity while at the same time introducing the notion of heterogeneity across multiple clouds' configurations. In addition, it allows optimization of a variety of selected performance criteria for a diversity of entities. Dynamics are taken into account by allowing reactive orchestration based on the current workload of already executed heterogeneous user specifications. These take the form of text files that the modeler can load into the toolkit and that take effect in real time at different simulation intervals. Finally, each unique request is scheduled for execution on an internal cloud datacenter host VM capable of fulfilling the service contract, which is formally specified in Service Level Agreements (SLAs) based upon user profiling.
    Advanced Information Networking and Applications (AINA), 2013 IEEE 27th International Conference on; 01/2013
  • Kan Zhang, Nick Antonopoulos
    ABSTRACT: Peer-to-Peer (P2P) networking is an alternative to cloud computing for relatively informal trade. One of the major obstacles to its development is the free-riding problem, which significantly degrades the scalability, fault tolerance and content availability of these systems. Incentive mechanisms based on bartering exchange rings are among the most common solutions to this problem. They organize users with asymmetric interests into bartering exchange rings, forcing users to contribute while consuming. However, existing ring formation approaches are inefficient and static. This paper proposes a novel cluster-based incentive mechanism (CBIM) that enables dynamic ring formation by modifying the Query Protocol of the underlying P2P systems. It also uses a reputation system to alleviate malicious behaviors. Users identify free riders by fully utilizing their local transaction information; identified free riders are blacklisted and thus isolated. The simulation results indicate that by applying the CBIM, the request success rate can be noticeably increased, since rational nodes are forced to become more cooperative and free-riding behaviors can be identified to a certain extent.
    Future Generation Computer Systems - FGCS. 01/2013;
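    A bartering exchange ring, as described in this abstract, is essentially a cycle in a directed "wants" graph: each user in the cycle gives to the next and receives from the previous, so everyone contributes while consuming. The sketch below shows the cycle-finding idea in Python; the graph representation and search are simplified assumptions, not the CBIM protocol itself.

    ```python
    def find_exchange_ring(wants, start):
        """DFS from `start` in the wants graph; return one cycle through it."""
        stack = [(start, [start])]
        while stack:
            user, path = stack.pop()
            for peer in wants.get(user, ()):
                if peer == start and len(path) > 1:
                    return path + [start]          # closed the ring
                if peer not in path:
                    stack.append((peer, path + [peer]))
        return None

    # alice wants a file from bob, bob from carol, carol from alice
    wants = {"alice": ["bob"], "bob": ["carol"], "carol": ["alice"]}
    print(find_exchange_ring(wants, "alice"))  # ['alice', 'bob', 'carol', 'alice']
    ```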
  • ABSTRACT: Virtualization offers flexible and rapid provisioning of physical machines. In recent years, the isolation and migration of virtual machines have improved resource utilization as well as resource management techniques. This paper focuses on the migration process and on leveraging virtual machine handling through automation. We outline current trends and issues with regard to datacentres that apply policy-based automation. The automated solution will improve the efficiency of datacentre operations, focusing mainly on improved resource handling and reduced power consumption. This could prove particularly useful in disaster recovery, wherein power supplies need to be optimized. The focus of this work is an approach for virtual machine and power management; we discuss how migration can be used to reduce server sprawl, minimize power consumption and balance load across physical machines.
    P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2013 Eighth International Conference on; 01/2013
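    The policy-based automation this entry discusses can be sketched as a single planning pass: migrate VMs away from overloaded hosts, and drain near-idle hosts so they can be powered down. The thresholds, data model and placement rule below are illustrative assumptions, not the paper's policies.

    ```python
    OVERLOAD, UNDERLOAD = 0.85, 0.20   # CPU utilisation policy thresholds

    def plan_migrations(hosts):
        """hosts: {name: {"util": float, "vms": [vm names]}} -> migration plan."""
        plan = []
        donors = [h for h, s in hosts.items()
                  if s["util"] > OVERLOAD or s["util"] < UNDERLOAD]
        receivers = [h for h, s in hosts.items()
                     if UNDERLOAD <= s["util"] <= OVERLOAD]
        for donor in donors:
            for vm in hosts[donor]["vms"]:
                if not receivers:
                    break
                # naive placement: least-utilised receiver takes the VM
                # (utilisation is not updated after each move, for brevity)
                target = min(receivers, key=lambda h: hosts[h]["util"])
                plan.append((vm, donor, target))
        return plan

    hosts = {"h1": {"util": 0.95, "vms": ["vm1", "vm2"]},
             "h2": {"util": 0.10, "vms": ["vm3"]},
             "h3": {"util": 0.40, "vms": ["vm4"]}}
    print(plan_migrations(hosts))  # everything lands on h3
    ```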
  • S. Sotiriadis, N. Bessis, N. Antonopoulos
    ABSTRACT: In recent years, much effort has been put into analyzing the performance of large-scale distributed systems like grids, clouds and inter-clouds with respect to a diversity of resources and user requirements. A common way to achieve this is to use simulation frameworks to evaluate novel models prior to developing solutions in high-cost settings. In this work we focus on the SimIC simulation toolkit as an innovative discrete event driven solution for mimicking the inter-cloud service formation, dissemination and execution phases, processes that are bundled in the inter-cloud meta-scheduling (ICMS) framework. Our work has meta-inspired characteristics, as we treat the inter-cloud as a decentralized and dynamic computing environment where meta-brokers act as distributed management nodes for dynamic and real-time decision making in an identical manner. To this end, we study the performance of service distributions among clouds based on a variety of metrics (e.g. execution time and turnaround) across different heterogeneous inter-cloud topologies. We also explore the behavior of the ICMS for different user submissions in terms of their computational requirements. The aim is to produce benchmark results to serve future research efforts on cloud and inter-cloud performance evaluation. The results are diverse across the different performance metrics. For the ICMS in particular, an increased performance tendency is observed when the system scales to massive numbers of user requests, implying improved scalability and service elasticity.
    Advanced Information Networking and Applications Workshops (WAINA), 2013 27th International Conference on; 01/2013
  • ABSTRACT: Cloud computing is driving a new revolution in the computing world. Elasticity is one of the key features that make the cloud so attractive. The cloud is still in its infancy, and it faces a wide range of security threats; these security shortcomings are slowing the adoption of cloud computing. This paper focuses on the security threats faced by weblets during their migration between the cloud and mobile devices. Weblet migration is an important task in the implementation of elasticity in the cloud. As a primary step, a secure communication channel is designed for weblet migration by deploying the secure shell protocol. The vulnerabilities in some authentication mechanisms are highlighted, and a better way of authenticating weblets with SFTP (Secure File Transfer Protocol) is suggested. Finally, the paper also covers managing traffic effectively in the designed channel with the aid of a back-pressure technique.
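    An SSH/SFTP transfer of the kind this abstract proposes can be sketched with the paramiko library in Python. Host name, credentials and paths below are placeholders, and the paper's actual channel design and back-pressure handling are not reproduced; this only shows the encrypted-transfer step.

    ```python
    import paramiko

    def migrate_weblet(host, user, key_path, local_weblet, remote_path):
        client = paramiko.SSHClient()
        # in production, verify host keys instead of auto-adding them
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, key_filename=key_path)
        try:
            sftp = client.open_sftp()
            sftp.put(local_weblet, remote_path)  # encrypted transfer over SSH
            sftp.close()
        finally:
            client.close()

    # hypothetical usage:
    # migrate_weblet("cloud.example.org", "weblet", "~/.ssh/id_rsa",
    #                "weblet.bin", "/srv/weblets/weblet.bin")
    ```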
  • H. Al-Aqrabi, Lu Liu, R. Hill, N. Antonopoulos
    ABSTRACT: Cloud computing is gradually gaining popularity among businesses due to its distinct advantages over self-hosted IT infrastructures. Software-as-a-service providers serve as the primary interface to the business user community. However, the strategies and methods for hosting mission-critical business intelligence (BI) applications on the cloud are still being researched. BI is a highly resource-intensive system requiring large-scale parallel processing and significant storage capacity to host the data warehouses. OLAP (online analytical processing) is the user-end interface of BI, designed to present multi-dimensional graphical reports to end users. OLAP employs data cubes formed as a result of multidimensional queries run on an array of data warehouses. In self-hosted environments it was feared that BI would eventually face a resource crunch, because it would not be feasible for companies to keep adding resources to host the never-ending expansion of the data warehouses and the OLAP demands on the underlying network. Cloud computing has instigated new hope for the future prospects of BI. But how will BI be implemented on the cloud, and what will the traffic and demand profiles look like? This paper attempts to answer these key questions pertaining to taking BI to the cloud. Cloud hosting of BI is demonstrated with the help of an OPNET simulation comprising a cloud model with multiple OLAP application servers applying parallel query loads on an array of servers hosting relational databases. The simulation results show that truly extensible parallel processing of database servers on the cloud can efficiently serve OLAP application demands. Hence, the BI designer needs to plan for a highly partitioned database running on massively parallel database servers, in which each server hosts at least one partition of the underlying database serving the OLAP demands.
    High Performance Computing and Communication & 2012 IEEE 9th International Conference on Embedded Software and Systems (HPCC-ICESS), 2012 IEEE 14th International Conference on; 01/2012
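    The partitioned-parallel pattern this entry recommends can be illustrated in a few lines: fan one OLAP aggregation out across partitions concurrently, then combine the partial results. The partition "query" below is a stand-in for a real database call, and the shard layout is invented for the example.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    PARTITIONS = [{"sales": 120}, {"sales": 310}, {"sales": 95}]  # toy shards

    def query_partition(partition):
        # stand-in for e.g. SUM(sales) pushed down to one partition server
        return partition["sales"]

    def parallel_olap_sum(partitions):
        with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
            partials = list(pool.map(query_partition, partitions))
        return sum(partials)  # final aggregation of the partial results

    print(parallel_olap_sum(PARTITIONS))  # 525
    ```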
  • ABSTRACT: Cloud computing has dramatically reshaped the whole IT industry in recent years. With the transition from IPv4 to IPv6, services running in cloud computing will face problems associated with IPv6 addressing: the notation is too long (39 bytes), there are too many variants of a single IPv6 address, and a potential conflict exists with conventional http_URL notation caused by the use of the colon (:). This paper proposes a new scheme that represents an IPv6 address with a shorter, more compact notation (27 bytes), without variants or conflicts with http_URL. The proposal is known as dot-base62x, as it represents an IPv6 address in Base62x and uses the well-known period (or dot) as a group delimiter instead of the colon. The relative merits and demerits of other works that predate this paper are reviewed and critically evaluated. Cloud computing, as a continuously emerging mainstream of network-based applications, is likely to be a forerunner in the use of IPv6 as the base protocol. As a result, cloud computing will benefit most from the new, compact and user-friendly textual representation of IPv6 addresses proposed by this paper.
    Cloud Computing. 01/2012; 1(3).
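    The compaction idea can be sketched roughly: split the 128-bit address into four 32-bit words, write each word in a 62-character alphabet, and join with dots. The plain base-62 variant below is an assumption for illustration (the paper's Base62x encoding differs in detail), but the worst-case length works out the same: 4 groups of at most 6 digits plus 3 dots is 27 bytes, versus 39 for colon-hex.

    ```python
    import ipaddress
    import string

    ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase

    def to_base62(n):
        if n == 0:
            return "0"
        digits = []
        while n:
            n, r = divmod(n, 62)
            digits.append(ALPHABET[r])
        return "".join(reversed(digits))

    def dot_encode(addr):
        n = int(ipaddress.IPv6Address(addr))  # the full 128-bit integer
        words = [(n >> shift) & 0xFFFFFFFF for shift in (96, 64, 32, 0)]
        return ".".join(to_base62(w) for w in words)

    print(dot_encode("2001:0db8:85a3:0000:0000:8a2e:0370:7334"))
    ```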
  • ABSTRACT: This paper investigates the key security challenges and solutions on the cloud with the help of literature reviews and an experimental model created in OPNET, which is simulated to produce statistics establishing the approach that cloud computing service providers should take to provide optimal security and compliance. The literature recommends the concept of Security-as-a-Service using unified threat management (UTM) for ensuring secured services on the cloud. Through the simulation results, this paper demonstrates that UTM may not be a feasible approach to security implementation, as it may become a bottleneck for the application clouds. The fundamental benefits of cloud computing (resources on demand and high elasticity) may be diluted if UTMs do not scale up effectively with the traffic loads on the application clouds. Moreover, it is not feasible for application clouds to absorb the performance degradation for security and compliance, because UTM will not be a total solution: applications share vulnerabilities just as systems do, and these will be outside the UTM cloud's control.
    01/2012;
  • ABSTRACT: Virtualisation is a prevalent technology in current computing. Among its many applications, it can be employed to reduce hardware costs through server consolidation, to implement "green computing" by reducing power consumption, and as an underpinning process for cloud computing, enabling the creation of a range of virtual networks and virtual supercomputers. This paper presents performance measurements for a cloning system known as iVIC that has been developed at Beihang University, China. In an extension to earlier work, it focuses on the factors that limit the number of clones that can be successfully started. iVIC creates clusters of virtual computers that can communicate with each other through virtual switch mechanisms; the virtual switches can also allow communication between the clone environment and the physical world. Testing has been undertaken to identify the limiting factors for creating and starting large numbers of clone machines, to measure the power consumption of the physical system, and to assess the computational performance capability of the clones.
    01/2012;
  • Lu Liu, Osama Masfary, Nick Antonopoulos
    ABSTRACT: Increasing electrical consumption within data centres is a growing concern for business owners, as it is quickly becoming a large fraction of the total cost of ownership. Ultra-small sensors could be deployed within a data centre to monitor environmental factors, lowering electrical costs and improving energy efficiency. Since servers and air conditioners are the top consumers of electrical power in the data centre, this research sets out to explore methods from each subsystem of the data centre as part of an overall energy-efficient solution. In this paper, we investigate current trends in Green IT awareness and how the deployment of small environmental sensors and site infrastructure equipment optimization techniques can offer a solution to a global issue by reducing carbon emissions.
    Sensors 01/2012; 12(5):6610-28. · 1.95 Impact Factor
  • Georgios Exarchakos, Nick Antonopoulos
    ABSTRACT: As a plethora of various distributed applications emerge, new computing platforms are necessary to support their extra and sometimes evolving requirements. This research derives its motive from the deficiencies of real networked applications deployed on platforms unable to fully support their characteristics, and proposes a network architecture to address that issue. Hoverlay is a system that enables the logical movement of nodes from one network to another, aiming to relieve requesting nodes that experience high workload. Node migration and a dynamic server overlay differentiate Hoverlay from Condor-based architectures, which exhibit more static links between managers and nodes. In this paper, we present a number of important extensions to the basic Hoverlay architecture, which collectively enhance the degree of control owners have over their nodes and the overall level of cooperation among servers. Furthermore, we carried out extensive simulations, which showed that Hoverlay outperforms Condor and Flock of Condors in both success rate and average successful query path length, at a negligible increase in messages.
    Peer-to-Peer Networking and Applications 01/2012; 5:58-73. · 0.37 Impact Factor
  • ABSTRACT: Ubiquitous computing environments are characterised by smart, interconnected artefacts embedded in our physical world that provide useful services to human inhabitants unobtrusively. Mobile devices are becoming the primary tools for human interaction with these embedded artefacts and for the utilisation of services available in smart computing environments such as clouds. Advancements in the capabilities of mobile devices allow a number of user- and environment-related context consumers to be hosted on these devices. Without a coordinating component, these context consumers and providers are a potential burden on device resources; specifically, the effect of uncoordinated computation and communication with cloud-enabled services can negatively impact battery life. Energy conservation is therefore a major concern in realising the collaboration and utilisation of mobile-device-based context-aware applications and cloud-based services. This paper presents the concept of a context-brokering component to aid the coordination and communication of context information between mobile devices and services deployed in a cloud infrastructure. A prototype context broker is experimentally analysed for its effects on energy conservation when accessing and coordinating with cloud services on a smart device, with results indicating a reduction in energy consumption.
    Journal of Ambient Intelligence and Humanized Computing 01/2012; abs/1202.5519.
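    The coordination idea in this abstract can be pictured as a broker that caches recent context values and batches outgoing updates, so the device radio is used less often. The cache policy and batching threshold below are illustrative assumptions, not the prototype's design.

    ```python
    import time

    class ContextBroker:
        def __init__(self, max_age_s=30, batch_size=5):
            self.cache = {}    # key -> (value, timestamp)
            self.pending = []  # updates waiting to be sent together
            self.max_age_s, self.batch_size = max_age_s, batch_size

        def get(self, key, fetch_from_cloud):
            value_ts = self.cache.get(key)
            if value_ts and time.time() - value_ts[1] < self.max_age_s:
                return value_ts[0]             # fresh enough: no radio use
            value = fetch_from_cloud(key)      # one coordinated fetch for all consumers
            self.cache[key] = (value, time.time())
            return value

        def publish(self, key, value, send_batch_to_cloud):
            self.pending.append((key, value))
            if len(self.pending) >= self.batch_size:
                send_batch_to_cloud(self.pending)  # one transmission, many updates
                self.pending.clear()

    broker = ContextBroker()
    loc = broker.get("location", lambda k: "51.9N,1.5W")  # placeholder fetch
    ```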
  • ABSTRACT: Over the last decades, cooperation amongst resources belonging to various environments has arisen as one of the most important research topics. This is mainly because of the differing requirements, in terms of job preferences, posed by different resource providers seeking the most efficient way to coordinate large-scale settings like grids and clouds. However, the complexity of the architectures (e.g. heterogeneity issues) and the targets that each paradigm aims to achieve (e.g. flexibility) remain common: to efficiently orchestrate resources and user demands in a distributed computing fashion by bridging the gap between local and remote participants. At first glance, this is directly related to scheduling, which is one of the most important issues in designing a cooperative resource management system, especially in large-scale settings. In addition, meta-computing, and hence meta-scheduling, offers additional functionality in the area of interoperable resource management because of its proficiency in handling sudden variations and dynamic situations in user demands. This work presents a review of scheduling in high performance, grid and cloud computing infrastructures. We conclude by analysing the most important characteristics of inter-cooperating infrastructures.
    01/2012;
  • A. Hinds, S. Sotiriadis, N. Bessis, N. Antonopoulos
    ABSTRACT: This paper describes the need for a unified simulation framework that defines the simulation tools and configuration settings for researchers to perform comparative simulations and test the performance of security tools for the AODV MANET routing protocol. The key objectives of the proposed framework are to provide an unbiased, repeatable simulation environment that collects all important performance metrics and has a configuration optimized for the AODV protocol's performance. Any security tool can then be simulated using the framework, showing the performance impact of the tool against the AODV baseline results or other security tools. The framework will control the network performance metrics and mobility models used in the simulation. It is anticipated that this framework will enable researchers to easily repeat experiments and directly compare results without configuration settings influencing the outcomes.
    Emerging Intelligent Data and Web Technologies (EIDWT), 2012 Third International Conference on; 01/2012
  • A. Jones, Lu Liu, N. Antonopoulos, Weining Liu
    ABSTRACT: In this paper, three current massively multiplayer online game peer-to-peer protocols are simulated and analysed. The results of the simulations suggest that improvement is still needed to lower the bandwidth usage of the protocols. Areas for improvement are suggested, and the protocols' business viability is discussed. It is also found that a complete solution suitable for the market is not yet available. Ideas for future research into peer-to-peer protocols and network middleware are put forward.
    Trust, Security and Privacy in Computing and Communications (TrustCom), 2012 IEEE 11th International Conference on; 01/2012
  • S. Sotiriadis, N. Bessis, F. Xhafa, N. Antonopoulos
    ABSTRACT: Cloud computing provides an efficient and flexible means for various services to meet the diverse and escalating needs of IT end-users. It offers novel functionality, including the utilization of remote services, in addition to virtualization technology. The latter offers an efficient method of harnessing the cloud's power by fragmenting a cloud physical host into small, manageable virtual portions. As a norm, the virtualized parts are generated by the cloud provider administrator through the hypervisor software based on a generic need for various services. However, several obstacles arise from this generalized and static approach. In this paper, we study and propose a model for dynamically instantiating virtual machines in relation to the characteristics of the current job. We then simulate a virtualized cloud environment in order to evaluate the model's dynamism by measuring the correlation of virtual machines to hosts for certain job variations. This allows us to compute the expected average execution time of various virtual machine instantiations per job length.
    Complex, Intelligent and Software Intensive Systems (CISIS), 2012 Sixth International Conference on; 01/2012
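    Dynamic instantiation of the kind this entry studies can be sketched as sizing each VM from the job's own characteristics rather than from a fixed catalogue of instance types. The field names and sizing rule below are illustrative assumptions, not the paper's model.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Job:
        length_mi: int  # job length in million instructions
        ram_mb: int     # memory the job declares it needs

    @dataclass
    class VM:
        mips: int       # CPU capacity to provision
        ram_mb: int

    def instantiate_vm(job, target_runtime_s=60, host_mips=4000, host_ram_mb=16384):
        """Provision just enough MIPS to finish near the target runtime,
        capped by what the physical host can actually spare."""
        mips = min(host_mips, max(250, job.length_mi // target_runtime_s))
        ram = min(host_ram_mb, job.ram_mb)
        return VM(mips=mips, ram_mb=ram)

    print(instantiate_vm(Job(length_mi=900_000, ram_mb=2048)))
    # VM(mips=4000, ram_mb=2048): a long job saturates the host's CPU share
    ```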
  • ABSTRACT: During the past few years much effort has been put into developing interoperable grid models capable of defining a decentralized control setting. Such environments may define new rules and actions for internal Virtual Organisation (VO) members, thereby posing new challenges towards an extended cooperation model for grids. In particular, VO members' knowledge may be expressed in the form of intelligent agents, thus providing a more autonomous solution for communicating members. Herein we present a mobile agent middleware for grid-interoperable infrastructures. Facing the growing scale of the grid, the proposed middleware aims to extend the knowledge of a specific neighbourhood of grid members (a VO) in relation to the addresses and the physical resources of known and unknown nodes that may join the grid VO. The internal data are structured in a rational sequence and stored within a public profile of each member called the metadata snapshot profile. The middleware is designed using the Java Agent Development (JADE) framework, in which mobile agents travel throughout the domain and, by collecting and updating internal data, extend the size of the VO. The interoperability standard is achieved by using the Critical Friends Community (CFC) model as the means to fulfil the inter-cooperation model.
    25th IEEE International Conference on Advanced Information Networking and Applications Workshops, WAINA 2011, Biopolis, Singapore, March 22-25, 2011; 01/2011
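    The metadata snapshot idea in this abstract can be pictured as a two-way merge: a visiting agent deposits what it has learned into a member's public profile and picks up what the member already knew. The Python sketch below is a simplified assumption about the data structure, not the paper's JADE design.

    ```python
    def merge_snapshots(agent_view, member_profile):
        """Union the agent's accumulated view with a member's stored profile;
        both map node address -> known physical resources."""
        for address, resources in agent_view.items():
            member_profile.setdefault(address, set()).update(resources)
        # the agent also learns whatever this member already knew
        for address, resources in member_profile.items():
            agent_view.setdefault(address, set()).update(resources)

    agent_view = {"10.0.0.5": {"cpu:8"}}
    profile = {"10.0.0.9": {"gpu:1"}}
    merge_snapshots(agent_view, profile)
    print(sorted(profile))  # ['10.0.0.5', '10.0.0.9']
    ```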

Publication Stats

184 Citations
15.62 Total Impact Points

Institutions

  • 2010–2013
    • University of Derby
      • Department of Computing
      • School of Computing & Maths
      Derby, ENG, United Kingdom
  • 2001–2012
    • University of Surrey
      • Department of Computing
      Guildford, England, United Kingdom
  • 2008
    • University of Leeds
      • School of Computing
      Leeds, England, United Kingdom