Entropy-Based Economic Denial of Sustainability Detection
Marco Antonio Sotelo Monge, Jorge Maestre Vidal and Luis Javier García Villalba *,†
Group of Analysis, Security and Systems (GASS), Department of Software Engineering and Artificial Intelligence (DISIA), Faculty of Computer Science and Engineering, Office 431, Universidad Complutense de Madrid (UCM), Calle Profesor José García Santesmases 9, Ciudad Universitaria, 28040 Madrid, Spain; masotelo@ucm.es (M.A.S.M.); jmaestre@ucm.es (J.M.V.)
* Correspondence: javiergv@fdi.ucm.es; Tel.: +34-91-394-7638
† These authors contributed equally to this work.
Received: 11 November 2017; Accepted: 27 November 2017; Published: 29 November 2017
Abstract: In recent years, a significant increase in the number and impact of Distributed Denial of Service (DDoS) threats has been reported by the different information security organizations. They typically target the depletion of the computational resources of the victims, hence drastically harming their operational capabilities. Inspired by these methods, Economic Denial of Sustainability (EDoS) attacks pose a similar motivation, but adapted to Cloud computing environments, where the denial is achieved by damaging the economy of both suppliers and customers. Therefore, the most common EDoS approach is making the offered services unsustainable by exploiting their auto-scaling algorithms. In order to contribute to their mitigation, this paper introduces a novel EDoS detection method based on the study of entropy variations related to the metrics taken into account when deciding auto-scaling actuations. Through the prediction and definition of adaptive thresholds, unexpected behaviors capable of fraudulently demanding the hiring of new resources are distinguished. To demonstrate the effectiveness of the proposal, an experimental scenario adapted to the singularities of EDoS threats and the assumptions driven by their original definition is described in depth. The preliminary results proved high accuracy.
Keywords: Cloud Computing; Denial of Service; Economic Denial of Sustainability; Entropy; Intrusion Detection; Information Security
1. Introduction
The main goal of Denial of Service (DoS) attacks is to deplete the resources of the victim systems with the purpose of disabling them. The abbreviation DoS typically refers to threats with a single source; when they originate from multiple sources, the expression Distributed Denial of Service (DDoS) attacks is applied. In the last decades these threats have grown, become more sophisticated and acquired a greater intrusive capacity, which magnifies their impact and hinders their mitigation. The different information security agencies have warned about this problem. For example, the European Union Agency for Network and Information Security (ENISA) registered a 30% increase of DDoS threats in the last year [1]. Given the magnitude of impact of the threats registered in autumn 2016 [2], both the European Commission (EC) [3] and the US Government [4] announced an important reinforcement of their measures against these attacks. According to the European Police (Europol), their harmful capabilities are propitiated by different circumstances: the rapid proliferation of botnets, the emergence of novel vulnerabilities and amplifying elements, a greater offer of malicious products such as Crimeware-as-a-Service (CaaS) in the black market, the massive popularization of certain technologies (e.g., mobile devices, Internet of Things (IoT), etc.), and the ignorance of users concerning good practices related to data protection and information security [5].
The most common DDoS approach is based on flooding, whose modus operandi is to inject numerous requests in order to saturate the victim's processing capabilities. As highlighted in [6], this is achieved by the constant and continuous generation of large volumes of requests (i.e., high-rate flooding), or by the seasonal injection of less noisy numbers of requests (i.e., low-rate flooding). Consequently, the research community assumed these behaviors and over the last decade has proposed solutions that facilitate their prevention, detection, mitigation and identification of sources, some of them discussed in depth in [7]. However, the emergence of new monitoring environments, in particular those that adopt the technologies that take part in the backbone of fifth generation networks, has led to a variation of these threats that, instead of compromising computing resources, focuses on damaging the economic sustainability of the services they support [8]. They are well known as Economic Denial of Sustainability (EDoS) attacks, and they are the object of study of the research described throughout the rest of the paper. To cooperate with the research community towards their mitigation, the following main contributions are accomplished:
- An in-depth review of the EDoS threats and the efforts made by the research community for their detection, mitigation and identification of sources.
- A multi-layered architecture for EDoS attack detection, which describes the management of the acquired information from its monitoring to the notification of possible threats.
- A novel entropy-based EDoS detection approach which, assuming the original definition of EDoS, allows discovering unexpected behavior on local-level metrics related to the auto-scaling capabilities of the victim system.
- An evaluation methodology adapted to the singularities of the EDoS threats and the assumptions driven by their original definition.
- Comprehensive experimental studies that validate the proposed detection strategy, in this way motivating its adaptation to future use cases.
The paper is organized into six sections, the first of them being the present introduction. Section 2 studies the main features of the EDoS attacks and their countermeasures. Section 3 introduces a novel EDoS detection system based on the analysis of entropy variations. Section 4 describes the performed experimentation. Section 5 describes and discusses the obtained results. Finally, Section 6 presents the conclusions and future work.
2. Background
This section describes the main features of the Economic Denial of Sustainability threats and some of the most relevant countermeasures in the literature.
2.1. Economic Denial of Sustainability Attacks
Hoff coined the term Economic Denial of Sustainability (EDoS) attacks in 2008 [9,10] and Cohen extended its definition [11], which is currently adopted by the research community. EDoS attacks are usually directed against Cloud Computing infrastructures, which play an essential role in the emergent communication technologies. Because of this, Singh et al. formally defined EDoS attacks as "threats which target is to make the costing model unsustainable and therefore making it no longer viable for a company to affordably use or pay for their Cloud-based infrastructure" [12]. EDoS attacks are also tagged in the literature as Reduction of Quality (RoQ) threats [13], or Fraudulent Resource Consumption (FRC) attacks [14]. These intrusions take advantage of the "pay-as-you-go" accounting model offered by most of the Cloud Computing providers and their auto-scaling services [15]. Their modus operandi varies slightly depending on the providers and the Cloud solutions they offer (e.g., OpenStack, Microsoft Azure, Amazon EC2, etc.) [13], as well as the scaling policies they implement (discrete, adaptive, etc.). However, EDoS tends to display a common pattern: the attacker injects requests that must be processed at server-side. They pose an important workload effect, which may be caused by different actions, among them requesting large files or queries [16], HTTP requests on XML files [17], or exploiting alternative Application layer vulnerabilities [18–20]. When the flooding of requests exceeds the computational capabilities of the hired services, the auto-scaling processes trigger the need to contract additional resources, which increases the bill that the client must pay. Somani et al. [14] studied the consequences of this increase of costs, which has distinct impacts depending on the side. For example, in addition to the impact on the offered Quality of Service, the economic losses may become unsustainable for the clients, who consequently will probably try to find a more profitable provider. This obviously also affects the supplier, which loses reputation, and hence money in the long term. The attack also impairs other services and network layers, mostly because of the impact of deploying additional resources. This involves, among others, the physical infrastructure, Network Function Virtualization (NFV) or multi-tenancy, which may compromise additional network resources [21]. For example, in [13], a low-rate flooding variant of EDoS is introduced with the purpose of maximizing the collateral damage (i.e., the consequences of auto-scaling) and making its detection more difficult. Such publication reviews its consequences at different Cloud Computing architecture levels.
2.2. Countermeasures
In general terms, the extensive literature on the defense against conventional DDoS threats lacks publications effective against EDoS attacks. This is because EDoS focuses on making Cloud resources economically unsustainable instead of depleting them. This is often achieved by far less noisy attacks, with a greater resemblance to the behavior of legitimate users [16]. Because of this, EDoS detection is driven by metrics related to resource consumption at server-side, while conventional DDoS detection usually analyzes network traffic metrics at packet and flow level [22]. Several specific approaches against EDoS attacks are collected and discussed in [8,16,23]. Some of them aim at their detection, for which two methods are typically distinguished. The first of them analyzes network traffic metrics, as is the case of those that describe web browsing behaviors [24], time spent at web pages [25] or packet header attributes, for example their TTL [26]. They are easy to implement and efficient, but rely on Application layer or networking protocol features more related to DDoS than EDoS; hence their accuracy is greatly restricted to each use case [27]. On the other hand, the second approach is based on modeling the economic sustainability of the services, looking for suspicious discordances [28]. This method has been significantly less considered by the research community, mainly because of its specificity; in particular, it entails a greater difference with conventional DDoS detection strategies and demands more complex processes at server-side. However, it is independent of the exploited network layer and provides a more comprehensive understanding of the impact of the requests on the protected services, the latter usually leading to greater accuracy.
Publications on the prevention and mitigation of EDoS threats focus on hampering their execution and minimizing their impact. The most complex prevention solutions mathematically model the resources required by the protected services and anticipate the consumption of future requests, usually adopting game theory or queuing methods [29]. They allow anticipating harmful situations, facilitating proactive responses, but must be complemented by reactive solutions. Major efforts towards mitigating EDoS threats focus on deploying access control mechanisms, as is the case of crypto-puzzles [30–32], graphical Turing tests [26,33] or reputation systems [34,35]. They are effective, but as highlighted in [23], resolving hard tests or deploying complex reputation schemes consumes additional resources at both client and server sides, and significantly affects the Quality of Experience (QoE) on the protected environment.
Once the threats are detected and mitigated, the final step is to identify their source. The literature lacks publications that specifically address this problem, with certain exceptions such as [24]. They usually model server usage behaviors based on Application layer metrics, among them web session duration, number of HTTP requests, or their impact on the protected environment. More generalist solutions are inherited from the advances on conventional DDoS attack source identification. They mainly include packet traceback techniques, some of them collected in [36]. For example, in [37] a novel approach is proposed that bypasses the deployment difficulties of conventional IP traceback techniques by studying ICMP error messages. As reviewed in [38], the features of the network topology have an important impact on the effectiveness of the source identification approaches, which tend to be problematic in highly non-seasonal environments. Alternatively, traps such as honeypots [39], or decoy virtual machines that co-exist with real ones in the same physical hosts [40], are deployed. They implement the aforementioned methods, thus providing an additional level of security.
3. EDoS Attack Detection
To establish the basis for defining an appropriate design methodology, the peculiarities of conventional Denial of Service attacks, of legitimate mass access to the protected services (i.e., flash crowds), and their differences with the Denial of Sustainability threats have been taken into account. They allowed defining the following assumptions and limitations concerning the proposal described in the rest of this section:
- As remarked by Hoff in the original definition of EDoS attacks [9], they pose threats that do not aim to deny the service of the victim systems, but to increase the economic cost of the services they offer in order to make them unsustainable.
- Hoff later clarified that, at network-level, EDoS threats resemble activities performed by legitimate users [10]. This implies that the distribution of the different network metrics (number of requests, number of sessions, frequency, bandwidth consumption, etc.) does not vary significantly when these attacks are launched. This is because, in order to ensure their effectiveness, they must go unnoticed.
- It is possible to identify EDoS attacks by analyzing performance metrics at local-level. Given that at network-level there are no differences between EDoS and normal traffic, the requests performed by these threats must involve a greater operational cost.
- Requests performed by EDoS attacks have a similar quality to those from legitimate users (for example, a similar success rate). However, attackers may exploit vulnerabilities (usually at Application layer) to extend their impact [14].
- DDoS attacks usually originate from a large number of clients, where each of them performs a huge number of low-quality requests. On the other hand, EDoS attacks also come from many sources, but each client performs a number of requests similar to that of legitimate users. Unlike in flash crowds, EDoS attacks affect the predictability of the performance metrics related to the costs resulting from attending the requests served by the victim [18].
Based on these premises, it is possible to assume that, by studying the predictability of performance metrics at local-level (e.g., processing time, memory consumption, input and output operations, CPU consumption, etc.), it is possible to successfully identify EDoS attacks. This is taken into account in the following subsections, where the introduced detection strategy is described. The proposal has the architecture illustrated in Figure 1. It must perform three main tasks: (1) monitoring and aggregation; (2) novelty detection; and (3) decision-making. They are described below.
Figure 1. Architecture for EDoS attack detection (clients issue requests to the server; host-side monitoring feeds local metrics to the aggregation stage; aggregated metrics are analyzed by the novelty detection stage through prediction and thresholding; detected outliers reach the decision-making stage, which issues alerts and responses).
3.1. Monitoring and Aggregation
At the monitoring stage, the factual knowledge necessary to deduce the nature of the requests to be analyzed is collected. Therefore, the detection system monitors local metrics related to the operational cost of responding to the received requests. Assuming that, in order to succeed, EDoS attacks attempt to trigger the auto-scaling mechanisms at the victim-side, the metrics that determine these actions acquire special relevance. Note that they are widely studied in the literature and vary according to the management services. Examples of well-known local-level metrics are: CPU utilization, warming time, response time, number of I/O requests, bandwidth or memory consumption [13,14]. Because of its relevance in recent Cloud computing commercial solutions (e.g., Google Cloud, Amazon EC2, etc.), the performed experimentation considered the percentage of CPU usage of the victim system. On the other hand, it is important to bear in mind that the analysis of the predictability degree of events has played an essential role in the defense against conventional DDoS threats. Among the most used aggregated metrics, it is worth mentioning the classical adaptation of entropy to information theory proposed by Shannon [41]. Note that approaches like [42] demonstrated its effectiveness when applied to DDoS detection, being a strong element in the discovery of flooding threats. Recent publications such as [16,27,28] tried to adapt this paradigm to the EDoS problem. However, most of them made the mistake of only considering information monitored at network-level, hence ignoring part of the information that truly defines the auto-scaling policies.
Because of this, the Aggregation stage of the proposed method calculates the information entropy $H(X)$ of the $\{x_1, x_2, \ldots, x_n\}$ instances of the qualitative variable $X$ monitored per observation, as well as their $\{p_1, p_2, \ldots, p_n\}$ probabilities. The proposed detection scheme defines $X$ as "the response time (rate) to the different requests performed by each client". Given that $X$ describes discrete events, its entropy is expressed as follows:

$$H(X) = -\sum_{i=1}^{n} p_i \log_a p_i \qquad (1)$$

where $\log_a$ denotes the logarithm in base $a$. $H(X)$ is normalized, hence being calculated by dividing the obtained value by the maximum observable entropy $\log_a n$. When the maximum entropy is reached, all the monitored clients made requests with the same CPU overload; on the contrary, if the registered entropy is 0 then (1) a single client carried out all the requests, or (2) there was no CPU consumption during the observation period. The sequence of monitored entropies is studied as a time series $\{H(X)_t\}_{t=0}^{N}$.
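As a minimal illustration of Equation (1), the following Python sketch computes the normalized entropy of the CPU time consumed per client within one observation window; the function name, the dictionary-based input and the use of the natural logarithm are assumptions made for clarity, not details taken from the original implementation.

```python
import math

def normalized_entropy(cpu_time_per_client):
    """Normalized Shannon entropy of the per-client CPU time within one window."""
    totals = [t for t in cpu_time_per_client.values() if t > 0]
    n = len(totals)
    total = sum(totals)
    if total == 0 or n <= 1:
        # No CPU consumption, or a single client issued all the requests.
        return 0.0
    probabilities = [t / total for t in totals]
    h = -sum(p * math.log(p) for p in probabilities)
    return h / math.log(n)  # divide by the maximum observable entropy

# Three clients with comparable per-request costs yield an entropy close to 1.
print(normalized_entropy({"c1": 0.021, "c2": 0.027, "c3": 0.036}))
```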
3.2. Novelty Detection
The next analytic step is to recognize the observations that significantly vary from normal behaviors. This is a one-class classification problem where it is assumed that the normal data comprises the previous $H(X)_{t=1}, \ldots, H(X)_{t=N-1}$ observations, and the goal is to deduce whether $H(X)_{t=N}$ belongs to the same activities. The literature provides a large variety of solutions to this problem [43]. However, because it was assumed that EDoS attacks could be identified by discovering discordances in the predictability of local-level aggregated metrics [18], the proposed system implements a forecasting approach.
3.2.1. Detection Criteria
In particular, the entropy for a certain horizon $h$, $\hat{H}(X)_{t=N+h}$, is predicted. Hence, the following Euclidean distance is considered:

$$dist(o, \hat{o}) = \sqrt{\left(\hat{H}(X)_{t=N+h} - H(X)_{t=N+h}\right)^2} \qquad (2)$$

If $H(X)_{t=N+h}$ differs from $\hat{H}(X)_{t=N+h}$, so $dist(o, \hat{o}) > 0$, an unexpected behavior is detected. The significance of this anomaly is established by two adaptive thresholds: the Upper Threshold ($Th_{sup}$) and the Lower Threshold ($Th_{inf}$). A novelty is discovered if any of the following conditions is met:

$$dist(o, \hat{o}) > 0 \;\text{and}\; H(X)_{t=N+h} > Th_{sup}$$
$$dist(o, \hat{o}) > 0 \;\text{and}\; H(X)_{t=N+h} < Th_{inf} \qquad (3)$$
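A hedged sketch of the criteria in Equations (2) and (3) is given below; the function signature and variable names are illustrative assumptions.

```python
def is_novelty(h_observed, h_predicted, th_sup, th_inf):
    """Detection criteria of Equations (2)-(3): the observation is flagged as a
    novelty when it deviates from the forecast and exceeds an adaptive threshold."""
    dist = abs(h_predicted - h_observed)  # one-dimensional Euclidean distance
    return dist > 0 and (h_observed > th_sup or h_observed < th_inf)
```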
3.2.2. Prediction
The implemented prediction methodology adopted the Autoregressive Integrated Moving Average $ARIMA(p, d, q)$ paradigm [44], which is defined by the following general-purpose forecast model:

$$Y_{\tau} - a_1 Y_{\tau-1} - \cdots - a_{p'} Y_{\tau-p'} = e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q} \qquad (4)$$

where $a_i$ are the parameters of the autoregressive part, $\theta_i$ are the parameters of the moving average part and $e_t$ is white noise. The adjustment of $p$, $d$, $q$ may make the ARIMA model equal to other classical forecasting models, for example the simple random walk ($ARIMA(0, 1, 0)$), $AR$ ($ARIMA(1, 0, 0)$), $MA$ ($ARIMA(0, 0, 1)$), simple exponential smoothing ($ARIMA(0, 1, 1)$), double exponential smoothing ($ARIMA(0, 2, 2)$), etc. Predictions $\hat{y}_t$ on ARIMA models are inferred by a generalization of the autoregressive forecasting method expressed as follows:

$$\hat{y}_t = \mu + \phi_1 Y_{\tau-1} + \cdots + \phi_p Y_{\tau-p} - \theta_1 e_{t-1} - \cdots - \theta_q e_{t-q} \qquad (5)$$

The calibration of the adjustment parameters $p$, $d$, $q$ considered the Akaike Information Criterion (AIC), as described in [45].
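As an illustration of this forecasting step, the sketch below fits a small grid of ARIMA(p, d, q) candidates with statsmodels, keeps the one with the lowest AIC and predicts the entropy h steps ahead. The candidate grid, the library choice and the fallback behavior are assumptions, since the paper does not specify the implementation.

```python
import itertools
import warnings

from statsmodels.tsa.arima.model import ARIMA

def forecast_entropy(series, horizon=1, max_order=2):
    """Fit ARIMA(p, d, q) candidates to `series` (a list of past entropy values),
    select the best one by AIC and forecast `horizon` steps ahead."""
    best_aic, best_fit = float("inf"), None
    for p, d, q in itertools.product(range(max_order + 1), repeat=3):
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                fitted = ARIMA(series, order=(p, d, q)).fit()
            if fitted.aic < best_aic:
                best_aic, best_fit = fitted.aic, fitted
        except Exception:
            continue  # skip candidates that fail to converge
    if best_fit is None:
        return series[-1], None  # naive fallback: repeat the last observation
    prediction = best_fit.forecast(steps=horizon)
    return float(prediction[-1]), best_fit
```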
3.2.3. Adaptive Thresholding
On the other hand, the adaptive thresholds define the Prediction Interval (PI) of the sensor, which is deduced in the same way as usually described in the literature [4], hence assuming the following expressions:

$$Th_{sup} = \hat{H}(X)_{t=N+h} + K\sqrt{\sigma^2}, \quad Th_{inf} = \hat{H}(X)_{t=N+h} - K\sqrt{\sigma^2}, \quad \sigma^2 = var(dist(o, \hat{o})) \qquad (6)$$

where $K$ is the confidence coefficient of the estimation (by default $z_{\alpha/2}$). Note that, despite linking its value to the normal distribution, it has been demonstrated that when the time series does not approach such distribution, the obtained error is unrepresentative [46]. Figure 2 illustrates an example of novelty detection. In the first 60 observations no $H(X)$ exceeds the adaptive thresholds; but at observation 61 an EDoS attack was launched, and the inferred changes meet the conditions to be considered novel.
Figure 2. Example of novelty detection (entropy per observation, together with the upper and lower adaptive thresholds).
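The prediction interval of Equation (6) could be derived as in the following sketch, where the variance of the recent forecast errors dist(o, ô) plays the role of σ² and K defaults to z_{α/2} ≈ 1.96; keeping a sliding window of past errors is an assumption.

```python
import statistics

def adaptive_thresholds(h_predicted, recent_errors, k=1.96):
    """Upper and lower thresholds around the forecast, as in Equation (6).
    `recent_errors` holds the latest dist(o, o_hat) values; `k` is the confidence
    coefficient (z_{alpha/2} by default)."""
    sigma = statistics.pstdev(recent_errors) if len(recent_errors) > 1 else 0.0
    return h_predicted + k * sigma, h_predicted - k * sigma
```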
3.3. Decision-Making and Response
According to the principles of anomaly-based intrusion detection compiled and discussed by Chandola et al. [47], once the appropriate premises are assumed, the identification of discordant behaviors may be indicative of malicious activities. As stated at the beginning of this section, the introduced EDoS detection system relies on the original definitions of C. Hoff and R. Cohen. Therefore, when a local metric directly related to triggering auto-scaling capabilities on Cloud computing becomes unpredictable, it is possible to deduce that the protected environment is being misused, and hence jeopardized. This occurs when $dist(o, \hat{o}) > 0$ and (1) $H(X)_{t=N+h} > Th_{sup}$ or (2) $H(X)_{t=N+h} < Th_{inf}$. Because the performed research focused only on detecting the threats, its response is to notify the detected incident. The report may trigger mitigation measures such as initiating more restrictive access control [30,31,33] or deploying source identification capabilities [24] (whose decision and development are out of scope). Therefore, it entails a good complement to many of the proposals in the literature.
4. Experiments
The following sections describe the Cloud-based testbed and related architectural components
considered throughout the performed experimentation. They are depicted in Figure 3.
Figure 3. Cloud execution environment for experiments (OpenStack deployment with a controller node running the Heat auto-scaling, Ceilometer, Neutron, Nova and RabbitMQ services, and a compute node hosting the Nova VM instance; the instance runs the Flask web service, the HTTP Usage Monitor and the Entropy Modeler, and receives HTTP GET traffic from the clients through the public network).
4.1. Execution Environment
The experimental cloud computing environment was built with OpenStack [48], a well-known open source cloud platform suitable for deploying public and private cloud environments of any size. The auto-scaling features of this cloud platform have also been tested effectively in recent publications [49,50]. The OpenStack deployment for the experimental testbed was composed of one controller node and one compute node. The controller runs the core OpenStack services and it also holds the Networking (Neutron) and Compute (Nova) essentials, as well as the Telemetry (Ceilometer) and Message Queue (RabbitMQ) services. In addition, it runs the Orchestration (Heat) services to allow the configuration of auto-scaling policies. The compute node runs in a separate server, hosting the Nova core services. A new Compute instance has been launched to deploy the web service used for experimentation. This virtual instance runs an Ubuntu 16.04-x64 server with 8 CPU cores and 8 GB of RAM.

On top of the operating system, a REST (Representational State Transfer) web service written in Flask [51] has been implemented. A REST web service has been chosen due to its simplicity and rapid development. REST is the predominant web API design model built upon HTTP methods [52], which allows the system to interact with several kinds of entities (i.e., humans, IoT devices). In REST, every client request (1) only generates a single server response (one-shot) and (2) every response must be generated immediately (one-way) [53]. This request-response model is suitable for focusing the analysis on the measurement of CPU processing times, by tracking the connected user and the impact of its client requests on the CPU consumption.

In addition to the web service, two modules were developed to run in the background: the HTTP Usage Monitor module and the Entropy Modeler. The former logs information regarding the monitoring of the processing times of client requests, whereas the latter performs novelty detection methods to trigger anomaly-based alerts to the OpenStack orchestration services.

On the client-side, a set of REST clients has been deployed to generate traffic according to several execution scenarios. The implementation details and characteristics of the components tested in the experimentation stage are explained in the forthcoming sections.
4.2. Server-Side Components
The following describes the deployed server-side components: RESTful Web Service, HTTP Usage
Monitor and the Entropy Modeler.
4.2.1. RESTful Web Service
To facilitate a seamless interaction with HTTP clients, a REST web service has been implemented on Flask, a Python-based framework for rapid development of web applications. The REST service exposes four HTTP endpoints that produce the execution of different list-sorting operations on the server; each of them consumes a different amount of CPU time, which is measured in the background. The endpoints and their average execution times are summarized in Table 1.
Table 1. HTTP GET endpoints and average CPU cost.

URI   Parameters        Average CPU Time in Seconds (1000 exec.)
/1    ?id={clientID}    0.02158
/2    ?id={clientID}    0.02781
/3    ?id={clientID}    0.03673
/4    ?id={clientID}    0.33604
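A minimal Flask sketch of the four endpoints is shown below. The list sizes used to obtain increasing sorting costs, and therefore the exact CPU times, are assumptions chosen only to reproduce the pattern of Table 1 (endpoint /4 being much more expensive than /1–/3).

```python
import random

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical list sizes: /4 sorts a far larger list, hence its higher CPU cost.
WORKLOADS = {1: 20_000, 2: 30_000, 3: 40_000, 4: 400_000}

def make_endpoint(size):
    def handler():
        client_id = request.args.get("id", "unknown")    # ?id={clientID}
        sorted(random.random() for _ in range(size))     # CPU-bound list sorting
        return jsonify({"client": client_id, "sorted_items": size})
    return handler

for idx, size in WORKLOADS.items():
    app.add_url_rule(f"/{idx}", endpoint=f"endpoint_{idx}", view_func=make_endpoint(size))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```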
4.2.2. HTTP Usage Monitor
Once the server receives a client HTTP request, the Usage Monitor module continuously measures the amount of CPU time consumed to process the request before sending the response back to the client. The module makes use of Python libraries and standard Linux utilities to track the CPU consumption of each client request. The collected data are then aggregated per client over configurable time intervals before being logged to the system. If more than one client connection is observed in the given time interval, only the sum (aggregated metric) of all the processing times is logged. This allows the creation of a time series, required for the next processing level.
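One possible way to measure and aggregate the per-client CPU time with standard Python facilities is sketched below; the class name, the use of time.process_time() and the one-second flush interval are assumptions consistent with the description above.

```python
import time
from collections import defaultdict

class UsageMonitor:
    """Accumulates the CPU time spent serving each client and flushes the
    aggregated values at a fixed interval (e.g., one second)."""

    def __init__(self, interval=1.0):
        self.interval = interval
        self.window = defaultdict(float)
        self.window_start = time.monotonic()

    def measure(self, client_id, func, *args, **kwargs):
        start = time.process_time()                    # CPU time, not wall-clock
        result = func(*args, **kwargs)
        self.window[client_id] += time.process_time() - start
        return result

    def flush_if_due(self, log):
        """Append one aggregated sample per interval to `log` (a list of dicts)."""
        if time.monotonic() - self.window_start >= self.interval:
            log.append(dict(self.window))
            self.window.clear()
            self.window_start = time.monotonic()
```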
4.2.3. Entropy Modeler
This module gathers the time series logged by the HTTP Usage Monitor and computes the entropy of the CPU time usage of the different requests performed by each client. With the resultant normalized entropy, the module forecasts the next h observations of the given time series, in conformance with the ARIMA model. The predicted values are taken to estimate the upper and lower forecasting thresholds. Whenever the resultant entropy falls outside the prediction intervals, a Traffic Anomaly alert is reported to the auto-scaling engine of the corresponding Cloud platform (i.e., OpenStack Heat).
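Putting the previous pieces together, the Entropy Modeler could run a loop similar to the following sketch; it reuses the hypothetical helpers outlined in Section 3 (normalized_entropy, forecast_entropy, adaptive_thresholds and is_novelty), and the delivery of the alert to the auto-scaling engine is deliberately left abstract.

```python
def entropy_modeler(samples, horizon=1, k=1.96, warmup=30):
    """Consume aggregated per-client CPU-time samples (one dict per observation
    window) and yield the measured entropy together with an alert flag."""
    history, errors = [], []
    for per_client_cpu in samples:
        h = normalized_entropy(per_client_cpu)
        alert = False
        if len(history) >= warmup:
            h_hat, _ = forecast_entropy(history, horizon)
            th_sup, th_inf = adaptive_thresholds(h_hat, errors, k)
            alert = is_novelty(h, h_hat, th_sup, th_inf)
            errors.append(abs(h_hat - h))
        history.append(h)
        yield h, alert  # an alert would be forwarded to OpenStack Heat / the operator
```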
4.3. Client-Side Component
On a separate server, several clients have been implemented as Python multi-threading scripts for HTTP traffic generation, which is sent to the web service hosted in the OpenStack virtual machine instance. The number of generated traffic requests is a discrete variable that follows a Poisson distribution, since the similarity of such traffic with this distribution is widely assumed by the research community [54]. It is modeled according to the traffic load requirements of each evaluation scenario. Every client is represented by a process thread, which models multiple parallel clients handling their own sets of requests independently of the others. When normal network conditions are modeled, all the clients send HTTP GET requests to the lower CPU-consuming endpoints (1–3) described in Table 1. When an attacker is modeled, it only calls the most complex endpoint (4), which has higher CPU demands at server-side. Note that GET requests also accept the client ID as a parameter. This facilitates the implementation of different client connections originating from the same computer, since all the thread-based clients share the same source IP address but are differentiated by client ID.
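The client-side load generator could be approximated as in the sketch below, with one thread per client and a Poisson-distributed number of requests per second; the server address, the use of the requests library and the derivation of the per-client rate (ERS shared among all clients) are assumptions.

```python
import random
import threading
import time

import numpy as np
import requests  # third-party HTTP client, assumed available

SERVER = "http://10.0.0.10:5000"  # hypothetical address of the Flask instance

def client_worker(client_id, seconds, lam, attacker=False):
    """Send a Poisson-distributed number of GET requests per second. Attackers
    call only the costly endpoint /4; normal clients pick endpoints /1-/3."""
    for _ in range(seconds):
        for _ in range(np.random.poisson(lam)):
            endpoint = 4 if attacker else random.choice([1, 2, 3])
            requests.get(f"{SERVER}/{endpoint}", params={"id": client_id}, timeout=5)
        time.sleep(1)  # coarse one-second pacing (ignores request latency)

def launch(clients=500, attackers=0, seconds=60, ers=60):
    lam = ers / clients  # expected requests per second and per client
    threads = [threading.Thread(target=client_worker,
                                args=(f"c{i}", seconds, lam, i < attackers))
               for i in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```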
4.4. Test Scenarios
Five main scenarios have been considered to validate the proposal. All of them compare the entropy levels of CPU processing times under normal traffic conditions against the entropy measured when an EDoS attack is launched. These attacks aim to produce CPU overhead. Therefore, the attack reduces the server's capacity to handle more connections, and it forces the decision to scale up the current virtual machine instance when the CPU usage is above a pre-defined CPU limit in the auto-scaling engine of the Cloud platform. The set of network traffic conditions described in Table 2 is assumed throughout the experiments. There, clients (C) generate the total number of web requests (TR) at the expected rate (ERS). It is worth remarking that ERS corresponds to the expected number of occurrences ($\lambda$) of the Poisson distribution. Therefore, the generated web requests represent the sample of connections to be analyzed. The MTR observation number (5000) is the frontier that divides the TR into two groups of 5000 client requests each. The first one operates under the normal traffic conditions described in Table 2, whereas a percentage of the second group contains the malicious requests, letting the remaining connections operate under normal conditions. For instance, in the second group a 5% malicious request rate indicates that 250 malicious requests and 4750 normal requests were observed. Table 3 defines the evaluation scenarios (E1 to E5) considered to deploy the EDoS attacks.
Table 2. Normal traffic conditions for experiments.

Characteristic                         Value
Web clients (C)                        500
Expected requests per second (ERS)     60
Total web requests (TR)                10,000
Malicious Triggering Request (MTR)     5000
Table 3. Network attack conditions and scenarios.

Parameter                                                E1    E2    E3    E4    E5
Malicious Request Rate (MRR)                             1%    5%    10%   15%   20%
Attacker Clients (AC)                                    5     25    50    75    100
Total number of malicious requests ((TR - MTR) × MRR)    50    250   500   750   1000
The experiments performed for each scenario started their execution with the normal web traffic conditions (first group of connections), with all the participant clients requesting endpoints 1–3, as explained before. However, at the time specified by the MTR connection, the attack was launched. It compromised several normal clients (C), which sent malicious requests to endpoint 4, thus increasing the CPU overhead. It is important to remark that the attackers connect to the server at the same rate (ERS) configured for normal clients, making them unnoticeable since their connection rate resembled legitimate traffic; however, they aimed to exploit the most time-consuming endpoint, which was exposed as a service vulnerability. To validate the proposal, a Cloud auto-scaling policy was considered, configured to launch a new virtual machine instance when the CPU consumption exceeded 40% over a one-minute interval.
5. Results
The experiments were performed with the parametrization presented in Table 3, adapted to each evaluation scenario. The first monitored metric was the CPU time consumption caused by processing the web requests launched from the clients. A summary of the CPU consumption of the server, measured at one-second intervals, is depicted in Figure 4. There, in all scenarios, half of the client connections exhibited the same behavior until the attack was triggered (MTR). From that moment on, the CPU overhead was influenced by the attack traffic volume described in Table 3. Bearing in mind the defined auto-scaling policy, it is noted that scenarios E3, E4 and E5 would have automatically launched a new virtual machine instance if the presence of the attack had gone unnoticed, hence demonstrating the consequences of the EDoS threats and making the attack detection strategy play an essential role. On the other hand, besides the CPU estimation, the entropy of the per-client processing time was constantly measured by the Entropy Modeler at one-second intervals, as plotted in Figure 5. The graph shows that the overall behavior of the entropy was contrary to the behavior noticed in the CPU overhead, with the higher entropy values located before the MTR observation. The drop in the entropy level was only slightly noticeable in scenario E1 (Figure 5a), but became much more perceptible in scenarios E2 to E5 (Figure 5b–e). Thereby, this pattern was directly influenced by the presence of the compromised devices, the entropy decreasing as more malicious requests were generated. Once the entropy was measured for each observation, the Entropy Modeler estimated the prediction thresholds to infer whether the observed entropy fell outside the predicted intervals, thus leading to the decision of triggering an alert when an EDoS attack was detected.
Figure 4. Average CPU consumption (%) per observation for each scenario (E1–E5).
The precision observed in the Receiver Operating Characteristic (ROC) space is summarized in Figure 6. There, five curves are illustrated, each one associated with one of the aforementioned evaluation scenarios (E1, E2, E3, E4, E5). Table 4 compiles several evaluation metrics (True Positive Rate (TPR), False Positive Rate (FPR) and Area Under the Curve (AUC)) and the best calibrations (K) to reach the highest accuracy. Bearing in mind these results, it is possible to deduce that the proposed method has proven to be more effective when the attack originates from a larger number of compromised nodes (e.g., E5, with 20% of the total number of connected clients). This is because a greater number of instances of the random variable X present similar probabilities, which leads to a more significant decrease in the entropy $H(X)$, and therefore to less concordance with the normal observations. On the other hand, labeling errors have occurred mainly due to the issuing of false positives, in situations where fluctuations of $H(X)$ derived from changes in the behavior of legitimate clients acquire a relevance similar to those inferred by malicious activities. Note that the larger the number of compromised nodes that take part in the attacks, the greater the possibility of forcing auto-scaling reactions. Based on this fact, it is possible to state that the proposed method improves its detection capabilities when facing more harmful threats. In addition, the existence of the calibration parameter $K$ allows operators to easily configure the level of restriction at which the system operates: when greater discretion is required, $K$ must adopt higher values. This considerably reduces the likelihood of issuing false alerts, hence facilitating the minimization of the cost of the countermeasures to be applied. In the opposite case, when the monitoring environments require greater protection it is advisable to decrease $K$, hence improving the possibility of detecting threats, but potentially leading to the deployment of more unnecessary countermeasures.
Table 4. Summary of results in ROC space.

Scenario   AUC (Trapezoidal)   TPR      FPR    K
E1         0.8858              0.7480   0.17   0.160
E2         0.9637              0.9630   0.09   0.163
E3         0.9766              0.9680   0.08   0.160
E4         0.9794              0.9644   0.06   0.160
E5         0.9830              0.9431   0.03   0.167
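The trapezoidal AUC reported in Table 4 can be reproduced from per-observation labels and anomaly scores (for instance, the deviation dist(o, ô) obtained while sweeping the calibration parameter K), as in the following sketch; the helper name and the scoring choice are assumptions.

```python
import numpy as np

def roc_curve_points(ground_truth, scores):
    """ground_truth: 1 for observations belonging to the attack, 0 otherwise.
    scores: per-observation anomaly scores. Returns FPR, TPR and trapezoidal AUC."""
    y = np.asarray(ground_truth)
    s = np.asarray(scores)
    positives, negatives = np.sum(y == 1), np.sum(y == 0)
    fpr, tpr = [0.0], [0.0]
    for th in np.unique(s)[::-1]:          # from the strictest to the most permissive
        predicted = s >= th
        tpr.append(np.sum(predicted & (y == 1)) / positives)
        fpr.append(np.sum(predicted & (y == 0)) / negatives)
    fpr.append(1.0)
    tpr.append(1.0)
    fpr, tpr = np.array(fpr), np.array(tpr)
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoidal rule
    return fpr, tpr, auc
```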
Figure 5. Entropy measurements per observation for each scenario: (a) entropy evolution in E1; (b) entropy evolution in E2; (c) entropy evolution in E3; (d) entropy evolution in E4; (e) entropy evolution in E5.
Figure 6. Results in ROC space (TPR (sensitivity) vs. FPR (1-specificity) for scenarios E1–E5).
6. Conclusions
In this paper, an entropy-based model for the detection of EDoS attacks in Cloud environments has been introduced. For this purpose, a comprehensive review of the EDoS-related research has been conducted in order to elaborate a multi-layered architecture tackling the detection of EDoS attacks. The proposed work exhibited good detection accuracy, thus preventing the unnecessary consumption of additional Cloud resources that would otherwise have been provisioned by auto-scaling policies based on fraudulent demands.

The experiments conducted to validate the proposed architecture have encompassed all the stages defined in the architecture, starting from the monitoring and aggregation of metrics that directly affect the Cloud computing cost model, through the novelty detection procedures to recognize an EDoS attack, to the decision-making and response actions to be applied in the system. The experimental testbed implemented a client-server REST architecture executed on different network scenarios. On the web server, the monitored per-client CPU times have been evaluated by analyzing their entropy levels, which exhibited a decrement when malicious requests originated by the compromised nodes were processed at server-side. In such scenarios, the entropy behaved inversely proportionally to the consumed CPU. In addition, the detection method has also demonstrated its effectiveness when predicting the entropy thresholds to be compared against the real measured entropy. Thereby, this approach has proven high accuracy, quantified by the area under the ROC curve. It is also worth mentioning the enhancement of the proposed model compared to other resource-consumption approaches presented in the literature, such as those based on the requesting of large files, database queries, or other web vulnerabilities, since this architecture relies on server-side consumption rather than on anomalous network-level metric patterns.

The presented approach, evaluation methodology and experiments conducted throughout this work also pose new potential research lines. The experimental scenarios should be extended to cover diverse network conditions in order to either enhance the validation or disclose possible evasion techniques. The defined model of measuring the resource consumption and diagnosing its entropy can be accommodated to include more metrics, thus extending its scope to wider analysis scenarios. Furthermore, it might be fitted to enhance adaptive auto-scaling policies on Cloud platforms by incorporating more complex evaluation criteria. Finally, the existing decision-making and countermeasures against EDoS attacks remain far from mature, and their evolution might effectively complement the conducted research.
Acknowledgments: This work is supported by the European Commission Horizon 2020 Programme under grant agreement number H2020-ICT-2014-2/671672 - SELFNET (Framework for Self-Organized Network Management in Virtualized and Software Defined Networks).
Author Contributions: The authors contributed equally to this research.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. European Union Agency for Network and Information Security (ENISA). Threat Landscape Report 2016. Available online: https://www.enisa.europa.eu/publications/enisa-threat-landscape-report-2016 (accessed on 28 November 2017).
2. Kolias, C.; Kambourakis, G.; Stavrou, A.; Voas, J. DDoS in the IoT: Mirai and other botnets. Computer 2017, 50, 80–84.
3. European Commission. Cybersecurity Strategy. 2017. Available online: https://ec.europa.eu/digital-single-market/en/policies/cybersecurity (accessed on 28 November 2017).
4. US National Cyber Incident Response Plan (NCIRP). 2017. Available online: https://www.us-cert.gov/ncirp (accessed on 28 November 2017).
5. European Police (Europol). The Internet Organised Crime Threat Assessment (IOCTA). 2017. Available online: https://www.europol.europa.eu/activities-services/main-reports/internet-organised-crime-threat-assessment-iocta-2017 (accessed on 28 November 2017).
6. Wei, W.; Chen, F.; Xia, Y.; Jin, G. A rank correlation based detection against distributed reflection DoS attacks. IEEE Commun. Lett. 2013, 17, 173–175.
7. Zargar, S.T.; Joshi, J.; Tipper, D. A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks. IEEE Commun. Surv. Tutor. 2013, 15, 2046–2069.
8. Baig, Z.A.; Sait, S.M.; Binbeshr, F. Controlled access to cloud resources for mitigating Economic Denial of Sustainability (EDoS) attacks. Comput. Netw. 2016, 97, 31–47.
9. Hoff, C. Cloud Computing Security: From DDoS (Distributed Denial Of Service) to EDoS (Economic Denial of Sustainability). 2008. Available online: http://rationalsecurity.typepad.com/blog/2008/11/cloud-computing-security-from-ddos-distributed-denial-of-service-to-edos-economic-denial-of-sustaina.html (accessed on 28 November 2017).
10. Hoff, C. A Couple of Follow-Ups on the EDoS (Economic Denial of Sustainability) Concept... 2009. Available online: http://rationalsecurity.typepad.com/blog/edos/ (accessed on 28 November 2017).
11. Cohen, R. Cloud Attack: Economic Denial of Sustainability (EDoS). Available online: http://www.elasticvapor.com/2009/01/cloud-attack-economic-denial-of.html (accessed on 28 November 2017).
12. Singh, P.; Manickam, S.; Rehman, S.U. A survey of mitigation techniques against Economic Denial of Sustainability (EDoS) attack on cloud computing architecture. In Proceedings of the IEEE 3rd International Conference on Reliability, Infocom Technologies and Optimization (ICRITO), Noida, India, 8–10 October 2014; pp. 1–4.
13. Bremler-Barr, A.; Brosh, E.; Sides, M. DDoS attack on cloud auto-scaling mechanisms. In Proceedings of the IEEE Conference on Computer Communications (INFOCOM 2017), Atlanta, GA, USA, 1–4 May 2017; pp. 1–9.
14. Somani, G.; Gaur, M.S.; Sanghi, D.; Conti, M. DDoS attacks in cloud computing: Collateral damage to non-targets. Comput. Netw. 2016, 109, 157–171.
15. Somani, G.; Gaur, M.S.; Sanghi, D.; Conti, M.; Buyya, R. DDoS attacks in cloud computing: Issues, taxonomy, and future directions. Comput. Commun. 2017, 107, 30–48.
16. Bhingarkar, A.S.; Shah, B.D. A survey: Securing cloud infrastructure against EDoS attack. In Proceedings of the International Conference on Grid Computing and Applications (GCA), Athens, Greece, 27–30 July 2015; pp. 16–22.
17. Vivinsandar, S.; Shenai, S. Economic Denial of Sustainability (EDoS) in Cloud Services Using HTTP and XML Based DDoS Attacks. Int. J. Comput. Appl. 2012, 41, 11–16.
18. Zhou, W.; Jia, W.; Wen, S.; Xiang, Y.; Zhou, W. Detection and defense of application-layer DDoS attacks in backbone web traffic. Future Gener. Comput. Syst. 2014, 38, 36–46.
19. Singh, K.; De, T. MLP-GA based algorithm to detect application layer DDoS attack. J. Inf. Secur. Appl. 2017, 36, 145–153.
20. Singh, K.; Singh, P.; Kumar, K. Application layer HTTP-GET flood DDoS attacks: Research landscape and challenges. Comput. Secur. 2017, 65, 344–372.
21. Singh, A.; Chatterjee, K. Cloud security issues and challenges: A survey. J. Netw. Comput. Appl. 2017, 79, 88–115.
22. Berezinski, P.; Jasiul, B.; Szpyrka, M. An entropy-based network anomaly detection method. Entropy 2015, 17, 2367–2408.
23. Bawa, P.S.; Manickam, S. Critical Review of Economical Denial of Sustainability (EDoS) Mitigation Techniques. J. Comput. Sci. 2015, 11, 855–862.
24. Idziorek, J.; Tannian, M.; Jacobson, D. Attribution of fraudulent resource consumption in the cloud. In Proceedings of the IEEE 5th International Conference on Cloud Computing, Honolulu, HI, USA, 24–29 June 2012; pp. 99–106.
25. Koduru, A.; Neelakantam, T.; Bhanu, S.M.S. Detection of Economic Denial of Sustainability Using Time Spent on a Web Page in Cloud. In Proceedings of the IEEE International Conference on Cloud Computing in Emerging Markets (CCEM), Bangalore, India, 16–18 October 2013; pp. 1–4.
26. Al-Haidari, F.; Sqalli, M.H.; Salah, K. Enhanced EDoS-Shield for Mitigating EDoS Attacks Originating from Spoofed IP Addresses. In Proceedings of the IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Liverpool, UK, 25–27 June 2012; pp. 1167–1174.
27. Singh, K.J.; Thongam, K.; De, T. Entropy-Based Application Layer DDoS Attack Detection Using Artificial Neural Networks. Entropy 2016, 18, 350.
28. Idziorek, J.; Tannian, M. Exploiting Cloud Utility Models for Profit and Ruin. In Proceedings of the IEEE International Conference on Cloud Computing (CLOUD), Washington, DC, USA, 4–9 July 2011; pp. 33–40.
29. Yu, S.; Tian, Y.; Guo, S.; Wu, D.O. Can We Beat DDoS Attacks in Clouds? IEEE Trans. Parallel Distrib. Syst. 2014, 25, 2245–2254.
30. Masood, M.; Anwar, Z.; Raza, S.A.; Hur, M.A. EDoS Armor: A cost effective economic denial of sustainability attack mitigation framework for e-commerce applications in cloud environments. In Proceedings of the IEEE 16th International Multi Topic Conference (INMIC), Lahore, Pakistan, 19–20 December 2013; pp. 37–42.
31. Khor, H.; Nakao, A. sPoW: On-demand cloud-based eDDoS mitigation mechanism. In Proceedings of the IEEE/IFIP International Conference on Dependable Systems & Networks (DSN), Lisbon, Portugal, 29 June–2 July 2009.
32. Kumar, M.N.; Sujatha, P.; Kalva, V.; Nagori, R.; Katukojwala, A.K.; Kumar, M. Mitigating Economic Denial of Sustainability (EDoS) in Cloud Computing Using In-Cloud Scrubber Service. In Proceedings of the IEEE 4th International Conference on Computational Intelligence and Communication Networks (CICN), Mathura, India, 3–5 November 2012; pp. 535–539.
33. Alosaimi, W.; Al-Begain, K. A new method to mitigate the impacts of the economical denial of sustainability attacks against the cloud. In Proceedings of the 14th Annual Post Graduates Symposium on the Convergence of Telecommunication, Networking and Broadcasting (PGNet), Liverpool, UK, 24–25 June 2013; pp. 116–121.
34. Liu, J.K.; Au, M.H.; Huang, X.; Lu, R.; Li, J. Fine-Grained Two-Factor Access Control for Web-Based Cloud Computing Services. IEEE Trans. Inf. Forensics Secur. 2016, 11, 484–497.
35. Yan, Q.; Yu, F.R.; Gong, Q.; Li, J. Software-Defined Networking (SDN) and Distributed Denial of Service (DDoS) Attacks in Cloud Computing Environments: A Survey, Some Research Issues, and Challenges. IEEE Commun. Surv. Tutor. 2016, 18, 602–622.
36. Alenezi, N.M.; Reed, M.J. Uniform DoS traceback. Comput. Secur. 2014, 45, 17–26.
37. Yao, G.; Bi, J.; Vasilakos, A.V. Passive IP traceback: Disclosing the locations of IP spoofers from path backscatter. IEEE Trans. Inf. Forensics Secur. 2015, 10, 471–484.
38. Jeong, E.; Lee, B. An IP traceback protocol using a compressed hash table, a sinkhole router and data mining based on network forensics against network attacks. Future Gener. Comput. Syst. 2014, 33, 42–52.
39. Wang, K.; Du, M.; Maharjan, S.; Sun, Y. Strategic Honeypot Game Model for Distributed Denial of Service Attacks in the Smart Grid. IEEE Trans. Smart Grid 2017, 8, 2474–2482.
40. Al-Salah, T.; Hong, L.; Shetty, S. Attack Surface Expansion Using Decoys to Protect Virtualized Infrastructure. In Proceedings of the 2017 IEEE International Conference on Edge Computing (EDGE), Honolulu, HI, USA, 25–30 June 2017; pp. 216–219.
41. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–656.
42. Bhuyan, M.H.; Bhattacharyya, D.K.; Kalita, J.K. An empirical evaluation of information metrics for low-rate and high-rate DDoS attack detection. Pattern Recognit. Lett. 2015, 51, 1–7.
43. Pimentel, M.A.F.; Clifton, D.A.; Clifton, L.; Tarassenko, L. A review of novelty detection. Signal Process. 2014, 99, 215–249.
44. Hillmer, S.C.; Tiao, G.C. An ARIMA-Model-Based Approach to Seasonal Adjustment. J. Am. Stat. Assoc. 1980, 77, 63–70.
45. Ong, C.S.; Huang, J.J.; Tzeng, G.H. Model identification of ARIMA family using genetic algorithms. Appl. Math. Comput. 2005, 164, 885–912.
46. Hyndman, R.J.; Koehler, A.B.; Ord, J.K.; Snyder, R.D. Prediction intervals for exponential smoothing state space models. J. Forecast. 2005, 24, 17–37.
47. Chandola, V.; Banerjee, A.; Kumar, V. Anomaly Detection: A Survey. ACM Comput. Surv. 2009, 41, doi:10.1145/1541880.1541882.
48. OpenStack: Open Source Software for Creating Private and Public Clouds. Available online: https://www.openstack.org (accessed on 28 November 2017).
49. Kang, S.; Lee, K. Auto-scaling of Geo-based image processing in an OpenStack cloud computing environment. Remote Sens. 2016, 8, 662.
50. Krieger, M.T.; Torreno, O.; Trelles, O.; Kranzlmuller, D. Building an open source cloud environment with auto-scaling resources for executing bioinformatics and biomedical workflows. Future Gener. Comput. Syst. 2017, 67, 329–340.
51. Flask: A Python Microframework. Available online: http://flask.pocoo.org (accessed on 28 November 2017).
52. Schnase, J.L.; Duffy, D.Q.; Tamkin, G.S.; Nadeau, D.; Thompson, J.H.; Grieg, C.M.; McInerney, M.A.; Webster, W.P. MERRA analytic services: Meeting the big data challenges of climate science through cloud-enabled climate analytics-as-a-service. Comput. Environ. Urban Syst. 2017, 61, 198–211.
53. Fielding, R.T.; Taylor, R.N.; Erenkrantz, J.R.; Gorlick, M.M.; Whitehead, J.; Khare, R.; Oreizy, P. Reflections on the REST architectural style and principled design of the modern web architecture (impact paper award). In Proceedings of the 11th Joint Meeting on Foundations of Software Engineering, Paderborn, Germany, 4–8 September 2017; pp. 4–14.
54. Barakat, C.; Thiran, P.; Iannaccone, G.; Diot, C.; Owezarski, P. Modeling Internet backbone traffic at the flow level. IEEE Trans. Signal Process. 2003, 51, 2111–2124.
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
... Some statistical methods such as entropy [16] or fuzzy [17] are proposed to detect EDoS attacks. In [16], the authors achieved a good detection accuracy. ...
... Some statistical methods such as entropy [16] or fuzzy [17] are proposed to detect EDoS attacks. In [16], the authors achieved a good detection accuracy. However, the method was experimented on in an extremely simple testbed, which raises doubt regarding its performance in real-world performance. ...
Article
Full-text available
Cloud computing is currently considered the most cost-effective platform for offering business and consumer IT services over the Internet. However, it is prone to new vulnerabilities. A new type of attack called an economic denial of sustainability (EDoS) attack exploits the pay-per-use model to scale up the resource usage over time to the extent that the cloud user has to pay for the unexpected usage charge. To prevent EDoS attacks, a few solutions have been proposed, including hard-threshold and machine learning-based solutions. Among them, long short-term memory (LSTM)-based solutions achieve much higher accuracy and false-alarm rates than hard-threshold and other machine learning-based solutions. However, LSTM requires a long sequence length of the input data, leading to a degraded performance owing to increases in the calculations, the detection time, and consuming a large number of computing resources of the defense system. We, therefore, propose a two-phase deep learning-based EDoS detection scheme that uses an LSTM model to detect each abnormal flow in network traffic; however, the LSTM model requires only a short sequence length of five of the input data. Thus, the proposed scheme can take advantage of the efficiency of the LSTM algorithm in detecting each abnormal flow in network traffic, while reducing the required sequence length of the input data. A comprehensive performance evaluation shows that our proposed scheme outperforms the existing solutions in terms of accuracy and resource consumption.
... The auto-scaling at VM level applies to the number of processing units, memory, networking components, etc [6]. Most presumably used auto-scaling metrics from the performance standpoint are threshold and duration [7]. These two metrics are considered necessary for triggering auto-scaling based on the need. ...
... Module 3 generates alert, initiates rule update process and blocks the attacking IP. In [7] an entropy-based architecture is proposed for the detection of EDoS attacks. The proposed multilayered architecture involves monitoring and aggregation of metrics that affect the cost model, the novelty detection procedures to detect EDoS attack, and the decision-making and action response procedures. ...
... EDoS attacks may be detected using statistical approaches, such as entropy and fuzzy methods [27]. The detection accuracy is excellent [28]. ...
... EDoS attacks may be detected using statistical approaches, such as entropy and fuzzy methods [27]. The detection accuracy is excellent [28]. ...
Article
Full-text available
Cloud computing is currently the most cost-effective means of providing commercial and consumer IT services online. However, it is prone to new flaws. An economic denial of sustainability attack (EDoS) specifically leverages the pay-per-use paradigm in building up resource demands over time, culminating in unanticipated usage charges to the cloud customer. We present an effective approach to mitigating EDoS attacks in cloud computing. To mitigate such distributed attacks, methods for detecting them on different cloud computing smart grids have been suggested. These include hard-threshold, machine, and deep learning, support vector machine (SVM), K-nearest neighbors (KNN), random forest (RF) tree algorithms, namely convolutional neural network (CNN), and long short-term memory (LSTM). These algorithms have greater accuracies and lower false alarm rates and are essential for improving the cloud computing service provider security system. The dataset of nine injection attacks for testing machine and deep learning algorithms was obtained from the Cyber Range Lab at the University of New South Wales (UNSW), Canberra. The experiments were conducted in two categories: binary classification, which included normal and attack datasets, and multi-classification, which included nine classes of attack data. The results of the proposed algorithms showed that the RF approach achieved accuracy of 98% with binary classification , whereas the SVM model achieved accuracy of 97.54% with multi-classification. Moreover, statistical analyses, such as mean square error (MSE), Pearson correlation coefficient (R), and the root mean square error (RMSE), were applied in evaluating the prediction errors between the input data and the prediction values from different machine and deep learning algorithms. The RF tree algorithm achieved a very low prediction level (MSE = 0.01465) and a correlation R 2 (R squared) level of 92.02% with the binary classification dataset, whereas the algorithm attained an R 2 level of 89.35% with a multi-classification dataset. The findings of the proposed system were compared with different existing EDoS attack detection systems. The proposed attack mitigation algorithms, which were developed based on artificial intelligence, outperformed the few existing systems. The goal of this research is to enable the detection and effective mitigation of EDoS attacks.
... An EDoS attack occurs when zombie machines transmit a large quantity of unintended traffic toward the cloud, exploiting the cloud's scalability to accumulate an exorbitant charge on a cloud adopter's bill (Agrawal and Tapaswi 2020; Monge 2017). The attack makes the cloud unsustainable by abusing the cloud billing model so that the cloud user is billed for the attack's activities. ...
Article
Cloud computing (CC) permits end-users to access the network via a field of shared resources. Vulnerabilities on the service providers' side will grow as the demand for CC grows. Economic denial of service (EDoS) attacks financially overwhelm the supplier, affecting the various organizations that use the cloud data. It is not possible to identify the attackers after an EDoS attack; however, their passage can be detected and blocked. Nevertheless, many challenges remain to be overcome. This work presents an effective EDoS-Dome system for CC. Secure user authentication is first provided by introducing a secret question key technique in the user registration and verification phases. Then, for effective trace-back of hacked data, an obfuscation technique aimed at IP spoofing is developed. To attain a fast response time (RT) and block the passage of attackers, a CI-RDA load balancer is developed. Lastly, the proposed regression coefficients deer hunting-deep Elman neural network classifies user data into a blacklist or whitelist based on particular conditions. The experimental outcomes show that the proposed work is effective, with 97.01% accuracy and 97.05% recall when weighed against prevailing methods for classifying the attacks. It also incurs lower cost and a fast RT with equivalent web services, which signifies a safe model against the EDoS attack.
... As highlighted in [29], beyond the economic impact, EDoS attacks entail several cross-cutting consequences, which among others concern the computational capabilities of the cloud, performance, latency, connectivity, and availability; from the socio-technical perspective, they negatively affect the trust between customers and Digital Service Providers (DSP). Although [29] demonstrated that cybersecurity measures based on predicting the behavior of the protected system, constructing adaptive thresholds, and clustering VNF instances by productivity were effective enough to reveal EDoS threats [34], their prevention, detection, mitigation, and attribution still entail important research challenges, compounded by the fact that the literature does not include a large collection of publications focused on defense against EDoS threats. The studies that address this problem usually assume network-level metrics, often confusing features for EDoS identification with those that typically detect flooding-based DDoS behaviors [35][36][37]. ...
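An adaptive threshold of the kind referred to here can be sketched, under illustrative assumptions about the smoothing factor and tolerance margin, as a prediction of the expected metric value plus a margin.

```python
# Exponentially weighted moving average (EWMA) predicts the expected value of
# a monitored metric; the alarm threshold follows that prediction plus a
# tolerance margin. Smoothing factor and margin are illustrative assumptions.

def adaptive_threshold(samples, alpha=0.3, margin=0.25):
    """Yield (sample, threshold, is_anomalous) for a stream of samples."""
    ewma = samples[0]
    for x in samples:
        threshold = ewma * (1 + margin)
        yield x, threshold, x > threshold
        ewma = alpha * x + (1 - alpha) * ewma   # update the prediction

metric = [100, 105, 98, 110, 104, 230, 240]      # e.g., requests per second
for value, thr, anomalous in adaptive_threshold(metric):
    if anomalous:
        print(f"value {value} exceeds adaptive threshold {thr:.1f}")
```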
Article
The last decade consolidated cyberspace as the fifth domain of military operations, which extends its preliminary intelligence and information-exchange purposes towards enabling complex offensive and defensive operations that support, and are supported by, parallel kinetic-domain actuations. Although there is a plethora of well-documented cases of strategic and operational interventions by cyber commands, the cyber tactical military edge is still a challenge, where cyber fires barely integrate into the traditional joint targeting cycle due to, among others, long planning/development times, asymmetric effects, strict target reachability requirements, or the fast propagation of collateral damage; the latter rapidly derives in hybrid impacts (political, economic, social, etc.) and evidences significant socio-technical gaps. In this context, it is expected that Tactical Clouds will disruptively facilitate cyber operations at the edge while exposing the rest of the digital assets of the operation to them. On these grounds, the main purpose of the conducted research is to review and analyze in depth the risks and opportunities of jeopardizing the sustainability of military Tactical Clouds at their cyber edge. Along with 1) a comprehensive formulation of the researched problem, the study 2) formalizes the Tactical Denial of Sustainability (TDoS) concept; 3) introduces the phasing, potential attack surfaces, terrains, and impact of TDoS attacks; 4) emphasizes the related human and socio-technical aspects; 5) analyzes the threats/opportunities inherent to their impact on cloud energy efficiency; 6) reviews their implications for military cyber thinking in tactical operations; 7) illustrates five extensive CONOPS that facilitate the understanding of the TDoS concept; and, given the high novelty of the discussed topics, 8) paves the way for further research and development actions.
... Entropy-based EDoS [15]: to detect EDoS attacks, an entropy-based model is proposed. ...
Article
Full-text available
Cloud computing is now known as the most cost-effective platform for delivering big data and artificial intelligence services over the Internet to enterprises and cloud consumers. However, despite many recent security developments, many cloud consumers continue to express great concern about using these platforms because they still have significant vulnerabilities. Typically, Economic Denial of Sustainability (EDoS) attacks exploit the pay-as-you-go billing mechanisms used by cloud service providers, so that a cloud customer is forced to pay an extra fee for the additional resources triggered by the attack activities. In our previous work, we already proposed a system to mitigate such EDoS attacks. Overall, that work presented an effective system for detecting abnormal events; however, the false-alarm rates remained relatively high and the detection rates low, because abnormal events could also be caused by the cloud customer. Furthermore, our previous work consumed a large number of computing resources. Therefore, in this paper, we propose an enhanced scheme to detect and mitigate EDoS attacks efficiently and reliably. Our proposed scheme is composed of online and offline phases and implements a gated recurrent unit, which can not only capture complex temporal dependence relations in the data but also reduce the vanishing gradient problem in time series. First, to reflect the normal patterns, our proposed scheme learns accurate representations of multivariate time series. Next, these representations are used to reconstruct the input data. Finally, the reconstruction probabilities can not only be used to find anomalies but also provide interpretations. The proposed scheme also introduces a self-adjusting threshold to reduce error rates, whereas existing solutions normally use a hard threshold to analyze the anomalies, which increases error rates. Our comprehensive analysis of the results shows outstanding performance compared to other solutions and to our previous work.
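A minimal sketch of a GRU-based reconstruction detector in this spirit is shown below; window size, layer widths, and the threshold rule are chosen purely for illustration, and the cited scheme's actual architecture and reconstruction-probability formulation may differ.

```python
import torch
import torch.nn as nn

# GRU autoencoder sketch: encode a multivariate time-series window,
# reconstruct it, and flag windows whose reconstruction error exceeds a
# threshold derived from the error distribution (self-adjusting threshold).

class GRUAutoencoder(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, seq_len, feat)
        _, h = self.encoder(x)                   # h: (1, batch, hidden)
        seq_len = x.size(1)
        dec_in = h[-1].unsqueeze(1).repeat(1, seq_len, 1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out)                 # reconstructed window

model = GRUAutoencoder()
windows = torch.randn(16, 30, 8)                 # 16 windows of 30 timesteps
recon = model(windows)
errors = ((recon - windows) ** 2).mean(dim=(1, 2))
# Illustrative self-adjusting threshold: mean + 3 std of the observed errors.
threshold = errors.mean() + 3 * errors.std()
print("anomalous windows:", (errors > threshold).sum().item())
```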
... As highlighted in [82], beyond the economic impact, EDoS attacks entail several cross-cutting consequences, which among others concern the computational capabilities of the cloud, performance, latency, connectivity, and availability; from the socio-technical perspective, they negatively affect the trust between customers and Digital Service Providers (DSP) in both directions. Although [82] demonstrated that cybersecurity measures based on predicting the behavior of the protected system, constructing adaptive thresholds, and clustering VNF instances by productivity were effective enough to reveal EDoS threats [85], their prevention, detection, mitigation, and attribution still entail important research challenges, compounded by the fact that the literature does not include a large collection of publications focused on defense against EDoS threats. The studies that address this problem usually assume network-level metrics, often confusing features for EDoS identification with those that typically detect flooding-based DDoS behaviors [10][9][6]. ...
Preprint
Full-text available
The last decade consolidated cyberspace as the fifth domain of operations, which extends its preliminary intelligence and information-exchange purposes towards enabling complex offensive and defensive operations that support, and are supported by, parallel kinetic-domain actuations. Although there is a plethora of well-documented cases of strategic and operational interventions by cyber commands, the cyber tactical military edge is still a challenge, where cyber fires barely integrate into the traditional joint targeting cycle due, among others, to long planning/development times, asymmetric effects, strict target reachability requirements, or the fast propagation of collateral damage; the latter rapidly derives in hybrid impacts (political, economic, social, etc.) and evidences significant socio-technical gaps. In this context, it is expected that tactical clouds will disruptively facilitate cyber operations at the edge while exposing the rest of the digital assets of the operation to them. On these grounds, the main purpose of the conducted research is to review and analyze in depth the risks and opportunities of jeopardizing the sustainability of military tactical clouds at the edge by cyber operations. Along with 1) a comprehensive formulation of the researched problem, the study 2) formalizes the Tactical Denial of Sustainability (TDoS) concept; 3) introduces the phasing, potential attack surfaces, terrains, and impact of TDoS attacks; 4) emphasizes the related human and socio-technical aspects; 5) analyzes the threats/opportunities inherent to their impact on cloud energy efficiency; 6) reviews their implications for military cyber thinking in tactical operations; 7) illustrates five extensive CONOPS that facilitate the understanding of the TDoS concept; and, given the high novelty of the discussed topics, 8) paves the way for further research and development actions.
Article
Full-text available
Distributed denial-of-service (DDoS) attacks are one of the major threats to web servers. The rapid increase of DDoS attacks on the Internet has clearly exposed the limitations of current intrusion detection and intrusion prevention systems (IDS/IPS), mostly caused by application-layer DDoS attacks. Within this context, the objective of the paper is to detect a DDoS attack using a multilayer perceptron (MLP) classification algorithm with a genetic algorithm (GA) as the learning algorithm. In this work, we analyzed the standard EPA-HTTP (Environmental Protection Agency-Hypertext Transfer Protocol) dataset and selected the parameters used as input to the classifier model for differentiating attack traffic from normal profiles. The selected parameters are the HTTP GET request count, entropy, and variance for every connection. The proposed model provides a better accuracy of 98.31%, a sensitivity of 0.9962, and a specificity of 0.0561 when compared to other traditional classification models.
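For illustration only, the per-connection features named above can be computed and fed to an off-the-shelf MLP as in the following sketch; note that scikit-learn's MLP is gradient-trained, whereas the cited work uses a genetic algorithm as the learning procedure (a GA sketch appears later in this listing). The toy data and feature definitions are assumptions.

```python
import math
import numpy as np
from collections import Counter
from sklearn.neural_network import MLPClassifier

# Per-connection features: HTTP GET request count, entropy of the requested
# URL distribution, and variance of URL popularity, feeding a small MLP.

def connection_features(requested_urls):
    """GET count, entropy and variance of URL popularity for one connection."""
    counts = Counter(requested_urls)
    total = len(requested_urls)
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    variance = float(np.var(list(counts.values())))
    return [total, entropy, variance]

# Toy dataset: benign connections spread requests, attacks hammer one URL.
X = [connection_features(["/a", "/b", "/c", "/a"]),
     connection_features(["/login"] * 50)]
y = [0, 1]
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict([connection_features(["/index"] * 40)]))  # likely flagged as attack
```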
Article
Full-text available
The Mirai botnet and its variants and imitators are a wake-up call to the industry to better secure Internet of Things devices or risk exposing the Internet infrastructure to increasingly disruptive distributed denial-of-service attacks.
Article
Full-text available
Advanced Metering Infrastructure (AMI) is an important component of a smart grid system used to measure, collect, store, analyze, and operate on users' consumption data. The need for communication and data transmission between consumers (smart meters) and utilities makes AMI vulnerable to various attacks. In this paper, we focus on Distributed Denial of Service (DDoS) attacks in the AMI network. We introduce honeypots into the AMI network as a decoy system to detect and gather attack information. We analyze the interactions between the attackers and the defenders and derive optimal strategies for both sides. We further prove the existence of several Bayesian-Nash Equilibria (BNEs) in the honeypot game. Finally, we evaluate our proposals on an AMI testbed in the smart grid, and the results show that our proposed strategy is effective in improving the efficiency of defense with the deployment of honeypots.
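As a toy illustration of the attacker-defender game setting (the payoff values are invented and unrelated to the paper's model), the following brute-force check searches a 2x2 normal-form game for pure-strategy Nash equilibria.

```python
import itertools

# Hypothetical 2x2 game: the defender chooses whether to deploy a honeypot,
# the attacker chooses whether to attack. payoff[d][a] = (defender, attacker).

defender_actions = ["deploy_honeypot", "no_honeypot"]
attacker_actions = ["attack", "refrain"]
payoff = {
    "deploy_honeypot": {"attack": (3, -2), "refrain": (-1, 0)},
    "no_honeypot":     {"attack": (-4, 4), "refrain": (0, 0)},
}

found = False
for d, a in itertools.product(defender_actions, attacker_actions):
    d_best = all(payoff[d][a][0] >= payoff[d2][a][0] for d2 in defender_actions)
    a_best = all(payoff[d][a][1] >= payoff[d][a2][1] for a2 in attacker_actions)
    if d_best and a_best:
        found = True
        print("pure-strategy Nash equilibrium:", d, a)
if not found:
    # This toy game has no pure-strategy equilibrium, which is why such
    # honeypot games are typically analysed with mixed/Bayesian strategies.
    print("no pure-strategy equilibrium; equilibrium lies in mixed strategies")
```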
Article
Full-text available
Application layer Distributed Denial of Service (DDoS) attacks have empowered conventional flooding based DDoS with more subtle attacking methods that pose an ever-increasing challenge to the availability of Internet based web services. These attacks hold the potential to cause similar damaging effects as their lower layer counterparts using relatively fewer attacking assets. Being the dominant part of the Internet, HTTP is the prime target of GET flooding attacks, a common practice followed among various application layer DDoS attacks. With the presence of new and improved attack programs, identifying these attacks always seems convoluted. A swift rise in the frequency of these attacks has led to a favorable shift in interest among researchers. Over the recent years, a significant research contribution has been dedicated toward devising new techniques for countering HTTP-GET flood DDoS attacks. In this paper, we conduct a survey of such research contributions following a well-defined systematic process. A total of 63 primary studies published before August 2015 were selected from six different electronic databases following a careful scrutinizing process. We formulated four research questions that capture various aspects of the identified primary studies. These aspects include detection attributes, datasets, software tools, attack strategies, and underlying modeling methods. The field background required to understand the evolution of HTTP-GET flood DDoS attacks is also presented. The aim of this systematic survey is to gain insights into the current research on the detection of these attacks by comprehensively analyzing the selected primary studies to answer a predefined set of research questions. This survey also discusses various challenges that need to be addressed, and acquaints readers with recommendations for possible future research directions.
Article
Full-text available
Cloud computing is a base platform for the distribution of large volumes of data and high-performance image processing on the Web. Despite wide application in Web-based services and their many benefits, geo-spatial applications based on cloud computing technology are still developing. Auto-scaling realizes automatic scalability, i.e., the scale-out and scale-in of virtual servers in a cloud computing environment. This study investigates the applicability of auto-scaling to geo-based image processing algorithms by comparing the performance of a single virtual server and multiple auto-scaled virtual servers under identical experimental conditions. In this study, the cloud computing environment is built with OpenStack, and four algorithms from the Orfeo toolbox are used for practical geo-based image processing experiments. The auto-scaling results from all experimental performance tests demonstrate its practical significance for cloud utilization in terms of response time. Auto-scaling thus contributes to the development of web-based satellite image application services using cloud-based technologies.
Article
The Distributed Denial of Service (DDoS) attack is being turned into a weapon by attackers, politicians, cyber terrorists, and others. Today there is a rapid rise in research on the mitigation of and defense against DDoS attacks; in reality, however, the capabilities of the attackers are also growing. Having initially focused on the network and transport layers, attacks nowadays increasingly converge on the application layer. In this paper, we first analyze features extracted from incoming packets. These features include the Hyper Text Transfer Protocol (HTTP) request count, the number of Internet Protocol (IP) addresses observed during a time window, the constancy of the port-number mapping, and the frame of the packets. We enumerate the combinations of these metrics and then analyze client behavior on public attack and normal datasets. We use the Environmental Protection Agency-Hypertext Transfer Protocol (EPA-HTTP) DDoS dataset, the Center for Applied Internet Data Analysis (CAIDA) 2007 dataset, and an experimentally produced DDoS dataset generated with the Slowloris attack to assess the efficiency and effectiveness of the features for layer-seven DDoS detection. Second, we employ a Multilayer Perceptron with a Genetic Algorithm (MLP-GA) to estimate the detection efficiency obtained with these metrics. The experimental results show that MLP-GA provides the best efficiency of 98.04% for detecting layer-seven DDoS attacks. The proposed method also yields the lowest false-positive rate when compared with traditional classifiers such as Naive Bayes, Radial Basis Function (RBF) Network, MLP, J48, and C4.5.
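A hypothetical sketch of the MLP-GA idea, evolving the weights of a tiny single-hidden-layer perceptron with a simple genetic algorithm over synthetic per-connection features, could look as follows; all sizes, GA parameters, and the toy data are assumptions and not the cited authors' setup.

```python
import numpy as np

# A tiny MLP whose weights are evolved with a genetic algorithm instead of
# gradient descent. X and y stand in for per-connection metrics and labels.

rng = np.random.default_rng(0)

def mlp_forward(weights, X, n_in=3, n_hidden=8):
    """Unpack a flat weight vector into a 1-hidden-layer MLP and run it."""
    w1_end = n_in * n_hidden
    W1 = weights[:w1_end].reshape(n_in, n_hidden)
    b1 = weights[w1_end:w1_end + n_hidden]
    W2 = weights[w1_end + n_hidden:w1_end + 2 * n_hidden]
    b2 = weights[-1]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # attack probability

def fitness(weights, X, y):
    """Classification accuracy used as the GA fitness score."""
    preds = (mlp_forward(weights, X) > 0.5).astype(int)
    return (preds == y).mean()

def evolve(X, y, pop_size=50, n_genes=3 * 8 + 8 + 8 + 1, generations=100,
           mutation_scale=0.1):
    population = rng.normal(size=(pop_size, n_genes))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in population])
        # Elitist selection: keep the best half as parents.
        parents = population[np.argsort(scores)[-pop_size // 2:]]
        # Offspring: uniform crossover between random parent pairs plus
        # Gaussian mutation.
        idx = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
        mask = rng.random((len(idx), n_genes)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += rng.normal(scale=mutation_scale, size=children.shape)
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[scores.argmax()]

# Toy usage with random data standing in for EPA-HTTP-derived features.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
best = evolve(X, y)
print("training accuracy:", fitness(best, X, y))
```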
Conference Paper
Seventeen years after its initial publication at ICSE 2000, the Representational State Transfer (REST) architectural style continues to hold significance as both a guide for understanding how the World Wide Web is designed to work and an example of how principled design, through the application of architectural styles, can impact the development and understanding of large-scale software architecture. However, REST has also become an industry buzzword: frequently abused to suit a particular argument, confused with the general notion of using HTTP, and denigrated for not being more like a programming methodology or implementation framework. In this paper, we chart the history, evolution, and shortcomings of REST, as well as several related architectural styles that it inspired, from the perspective of a chain of doctoral dissertations produced by the University of California's Institute for Software Research at UC Irvine. These successive theses share a common theme: extending the insights of REST to new domains and, in their own way, exploring the boundary of software engineering as it applies to decentralized software architectures and architectural design. We conclude with discussion of the circumstances, environment, and organizational characteristics that gave rise to this body of work.
Article
Cloud computing provides on-demand services over the Internet with the help of a large amount of virtual storage. The main features of cloud computing are that the user does not need to set up expensive computing infrastructure and that the cost of its services is low. In recent years, cloud computing has been integrated into industry and many other areas, which has encouraged researchers to investigate new related technologies. Due to the availability of its services and its scalability for computing processes, individual users and organizations transfer their applications, data, and services to cloud storage servers. Regardless of its advantages, the shift from local to remote computing has brought many security issues and challenges for both consumers and providers. Many cloud services are provided by trusted third parties, which raises new security threats. The cloud provider delivers its services through the Internet and uses many web technologies, which gives rise to new security issues. This paper discusses the basic features of cloud computing, security issues, threats, and their solutions. Additionally, the paper describes several key topics related to the cloud, namely the cloud architecture framework, service and deployment models, cloud technologies, cloud security concepts, threats, and attacks. The paper also discusses many open research issues related to cloud security.