
A Survey on Network Security Monitoring Systems

Ibrahim Ghafir
Faculty of Informatics, Masaryk University, Brno, Czech Republic
School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, UK

Jakub Svoboda
Faculty of Informatics, Masaryk University, Brno, Czech Republic

Mohammad Hammoudeh
School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, UK

Vaclav Prenosil
Faculty of Informatics, Masaryk University, Brno, Czech Republic
Abstract

Network monitoring is a difficult and demanding task that
is a vital part of a network administrator’s job. Network
administrators are constantly striving to maintain smooth
operation of their networks. If a network were to be down
even for a small period of time, productivity within a com-
pany would decline, and in the case of public service depart-
ments the ability to provide essential services would be com-
promised. There are different network security approaches.
This paper provides the readers with an overview of concrete
software implementations of the current network monitoring
approaches. In addition, it presents a comparison between
those implementations.
Keywords: Network security monitoring; packet capture; deep packet inspection; flow observation.

1. Introduction
Monitoring helps network and systems administrators to
identify possible issues before they affect business continuity
and to find the root cause of problems when something goes
wrong in the network. Whether it is a small business with
less than 50 nodes or a large enterprise with more than 1000
nodes, continuous monitoring helps to develop and maintain
a high performing network with little downtime.
For network monitoring to be a valuable addition to a net-
work, the monitoring design should adopt basic principles.
The monitoring system should be comprehensive and cover
every aspect of an enterprise, such as the network and connectivity, systems as well as security. It would also be preferable if the system provides a single-pane-of-glass view into
everything about the network and includes reporting, prob-
lem detection, resolution, and network maintenance. Fur-
ther, every monitoring system should provide reports that
can cater to a different level of audiences, the network and
systems admin, as well as to management. Most impor-
tantly, a monitoring system should not be too complex to
understand and use, nor should it lack basic reporting and
drill down functionalities.
Network monitoring is a difficult and demanding task that
is a vital part of a network administrator’s job. Network
administrators are constantly striving to maintain smooth
operation of their networks. If a network were to be down
even for a small period of time, productivity within a com-
pany would decline, and in the case of public service de-
partments the ability to provide essential services would be
compromised. In order to be proactive rather than reactive,
administrators need to monitor traffic movement and per-
formance throughout the network and verify that security
breaches do not occur within the network.
There are different network security approaches [30]. This
paper provides the readers with an overview of concrete soft-
ware implementations of the current network monitoring ap-
proaches. In addition, it presents a comparison between
those implementations.
The remainder of this paper is organized as follows. Sec-
tion 2 classifies the current network security monitoring im-
plementations into three main classes. A comparison be-
tween presented implementations is provided in Section 3
and Section 4 concludes the paper.
2. Network Security Monitoring Implementations

This section classifies the current network security moni-
toring implementations into packet capture representatives,
deep packet inspection representatives and flow-based obser-
vation representatives. It also provides information about
the usefulness of particular tools for development of new
network traffic analysis methods.
2.1 Packet Capture Representatives
2.1.1 Tcpdump
Tcpdump is a command line tool for packet capture analy-
sis. Tcpdump can analyze both live traffic using the libpcap
library and captured packet traces in PCAP format. Packets
may be filtered both before and after the capture. Filtering
before the capture can be done using BPF (Berkeley Packet
Filter). Filtering after capture can be achieved using tcp-
dump’s filters, described later in this section.
Data are printed out in text format. The output displays individual packets with information that includes source and destination addresses, the L4 protocol used, and L4 protocol flags. Figure 1 shows output listing two packets.
$ tcpdump -r pcap -n \
"src host and dst port 22 and tcp[13] = 2"
reading from file pcapfile, link-type EN10MB (Ethernet)
20:20:40.512613 IP > Flags [S],
seq 3123841387, win 29200, options [mss 1460,sackOK,TS val 509640
ecr 0,nop,wscale 7], length 0
20:20:41.356843 IP > Flags [S],
seq 1764864395, win 29200, options [mss 1460,sackOK,TS val 509851
ecr 0,nop,wscale 7], length 0
Figure 1: Example of a tcpdump filter and tcp-
dump’s output.
Packets to be displayed can be filtered using expressions.
Filters can be imposed on source and destination addresses,
ports, L3 and L4 protocols, and L4 protocol flags. Addresses
can be expressed as individual addresses or in CIDR notation. Multiple rules in an expression can be composed using boolean operators. The example in Figure 1 uses three filters. src host selects only packets originating from the given source IP address, and dst port 22 selects only packets destined for port 22. Finally, tcp[13] = 2 selects packets in which the decimal value of the 14th byte of the TCP header is 2. The filters are composed using the and keyword, which means only packets that meet all the criteria pass through the filter.
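The byte-indexing filter above can be illustrated with a short sketch. The following Python snippet (illustrative only; tcpdump itself evaluates this in BPF, and the header bytes here are made-up example values) shows why tcp[13] = 2 matches exactly those packets whose only TCP flag is SYN:

```python
# Byte 13 of the TCP header (0-indexed) holds the flag bits:
# CWR ECE URG ACK PSH RST SYN FIN  ->  a bare SYN has the value 0x02.
TCP_FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04, "PSH": 0x08,
             "ACK": 0x10, "URG": 0x20, "ECE": 0x40, "CWR": 0x80}

def matches_syn_only(tcp_header: bytes) -> bool:
    """Mimic the filter tcp[13] = 2: true only for a bare SYN."""
    return tcp_header[13] == TCP_FLAGS["SYN"]

# A minimal 20-byte TCP header with only the SYN flag set (hypothetical values).
syn_header = bytes([0x00, 0x16, 0xd4, 0x31,   # source port 22, example dest port
                    0x00, 0x00, 0x00, 0x00,   # sequence number
                    0x00, 0x00, 0x00, 0x00,   # acknowledgement number
                    0x50, 0x02,               # data offset, flags byte = SYN
                    0x72, 0x10, 0x00, 0x00,   # window, checksum
                    0x00, 0x00])              # urgent pointer
syn_ack = bytearray(syn_header)
syn_ack[13] = 0x12                            # SYN+ACK (0x10 | 0x02)

print(matches_syn_only(syn_header))        # bare SYN passes the filter
print(matches_syn_only(bytes(syn_ack)))    # SYN+ACK does not
```

Note that tcp[13] = 2 therefore misses SYN packets that also carry other flags; filtering on the SYN bit alone would use tcp[13] & 2 != 0 instead.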
Tcpdump needs root privileges to open the network inter-
face. Operation without full root is possible using SUID
or Linux capabilities. Granting the tcpdump executable
cap_net_raw and cap_net_admin capabilities allows tcpdump
to be run as a regular user.
2.1.2 Wireshark
Wireshark is a graphical tool for packet capture analy-
sis. While Wireshark and tcpdump are implementations of
the same architectural approach, their underlying ideas dif-
fer. Tcpdump is as close to the raw data as possible, while
Wireshark strives to provide higher-level representation of
the same data.
Wireshark can analyze both live traffic using the libp-
cap library and captured packet traces in PCAP format.
Captured packets can be filtered both during and after the
capture. Filtering after capture can be achieved using filter expressions. Filtering during capture can be done using BPF.
Data are displayed as text arranged in a scrollable col-
ored list and expandable boxes. Wireshark’s main window
has two frames. The upper frame displays list of captured
packets with their basic attributes displayed. A line may be
colored, based on the protocol the individual packet belongs
to. When the user selects a packet in the upper frame, this particular packet is displayed in the lower frame. The lower frame's representation includes several boxes that can be expanded and collapsed. The boxes contain various attributes of the packet as well as representations of the packet's data at the individual ISO/OSI layers. Related data from other packets, such as a reassembled HTTP or TCP stream, may also be displayed there.
Packets displayed in the upper frame can be filtered using
expressions entered in the text box on the top of the window.
The expression vocabulary is richer than that of tcpdump.
Figure 2 shows the architecture of Wireshark.
Figure 2: Wireshark architecture showing privilege separation.
Wireshark uses a separate program to capture the traf-
fic, dumpcap. The reason is to allow separation of privi-
leges [25]. Wireshark can be run as a regular user and only
dumpcap has to be given special permissions. Dumpcap
needs root privileges to open the network interface. Oper-
ation without the user having root access is possible using
either SUID or Linux capabilities. Granting the dumpcap ex-
ecutable cap_net_raw and cap_net_admin capabilities allows
dumpcap to be run as a regular user without SUID.
2.2 Deep Packet Inspection Representatives
2.2.1 Snort
Snort is an intrusion detection system performing deep
packet inspection using pattern matching. The pattern
matching is implemented in the form of rules [12]. Rules
are structured text files describing network traffic data of
interest. Typically, rules are used to generate alerts when
a security-related incident occurs, such as malware activity,
attack, or breach of security policies. A rule contains in-
formation specifying when the rule should be triggered. An
important part of this information is one or more patterns
that are searched in the network traffic. The pattern can
be a sequence of characters or bytes or a regular expression.
Figure 3 shows Snort rule structure.
rule header: alert tcp any any -> 111
rule options: (content:"|00 01 86 a5|"; msg:"mountd access";)
Figure 3: Snort rule structure.
Snort rules have a specific structure. The beginning of the
rule before the parentheses describes which network flows
the rule refers to. This is called the rule header. The rule
header specifies the action the rule should perform (alert for
instance), L3 protocol and source/destination IP addresses
and ports on which to match. Variables may be used in place
of IP addresses. The rest of the rule inside the parentheses
is called the rule options. Rule options specify content on
which the rule matches and other properties of the rule, its
name, classification type, etc. The most important part of
the rule is the content keyword that specifies a pattern to be
found in the packet’s payload. The content keyword’s func-
tion can be further changed using modifiers. For instance,
modifiers offset, distance, depth, and within control in which
areas of the packet the rules are matched [21].
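The header/options split described above can be sketched with a small parser. This is a hypothetical helper written for illustration, not part of Snort, and it ignores the escaping and quoting that real Snort rule parsing must handle; the $HOME_NET variable stands in for a concrete destination address, as the text notes variables may:

```python
# Split a Snort-style rule into its header and its list of options,
# mirroring the structure described above (illustrative sketch only).
def split_snort_rule(rule: str):
    head, _, opts = rule.partition("(")
    # Drop the closing ");" and break the options apart at semicolons.
    options = [o.strip() for o in opts.rstrip(");").split(";") if o.strip()]
    return head.strip(), options

rule = ('alert tcp any any -> $HOME_NET 111 '
        '(content:"|00 01 86 a5|"; msg:"mountd access";)')
header, options = split_snort_rule(rule)
print(header)    # action, protocol, addresses, and ports
print(options)   # content pattern and rule message
```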
Snort has three special keywords, byte_test, byte_jump, and byte_extract, that allow the pattern matching to be adjusted based on data in an individual packet [17]. The first two keywords behave as patterns that match when their conditions are true. byte_test performs arithmetic (<, <=, =, >, >=) and bitwise (AND, OR) comparisons on sequences of bytes. byte_jump skips forward over a number of bytes inferred from a value in the payload before the next pattern is evaluated. If the jump is possible, byte_jump also behaves as a match. This behavior can be used to match packets with a specific length based on specific data in the payload. byte_extract converts specified bytes into a numerical variable that can be used later in the rule. These keywords do not allow more complicated decoding or processing.
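The byte_test semantics can be mimicked in a few lines. The following Python simulation is an illustration, not Snort code (real byte_test also supports endianness and string-conversion options not shown here); it reads a byte sequence at an offset, converts it to a number, and compares it against a value:

```python
# Simulate byte_test:<count>,<operator>,<value>,<offset> on a payload.
import operator

OPS = {"<": operator.lt, "=": operator.eq, ">": operator.gt,
       "&": lambda a, b: (a & b) == b}   # bitwise AND test

def byte_test(payload: bytes, count: int, op: str, value: int, offset: int) -> bool:
    """Read <count> bytes at <offset>, interpret as a big-endian number,
    and compare with <value> -- a match when the comparison holds."""
    num = int.from_bytes(payload[offset:offset + count], "big")
    return OPS[op](num, value)

payload = bytes([0x00, 0x01, 0x86, 0xa5])      # sample payload bytes
print(byte_test(payload, 2, "=", 0x0001, 0))   # first two bytes equal 1
print(byte_test(payload, 2, ">", 0x8000, 2))   # next two bytes exceed 0x8000
```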
Snort’s architecture allows the implementation of so-
called preprocessors [22]. Preprocessors read the packet be-
fore rule evaluation, serially in the order specified by Snort’s
configuration. This allows implementation of additional rule
keywords. Moreover, preprocessors allow implementation of
functionality more complicated than just pattern matching,
such as data decoding and anomaly detection. For instance,
the Normalizer preprocessor converts equivalent values to a
unified format with the goal of making IDS evasion harder.
Snort’s architecture including the preprocessors is depicted
in Figure 4.
Packet Capture
Packet Decoding
Preprocessor 1
Preprocessor 2
Preprocessor 3Rules
Figure 4: Snort architecture.
There are several preprocessors for anomaly detection
available. Frag3 and stream5 preprocessors are integrated in
the official Snort distribution and detect protocol anomalies.
SPADE [18], PHAD [28], and snortad [2] are third-party pre-
processors detecting traffic anomalies. Snort preprocessors
are usually implemented in C. They allow implementation
of similar concepts to those that can be implemented in the
event-based architecture.
Snort needs root privileges to open the network interface.
It is possible to configure Snort to drop its privileges to a
non-root user once it opens the network interface.
Snort is a single-threaded application. Multithreaded Snort setups work in the following way: the monitored traffic is divided by flows into multiple parts, and each part of the traffic is fed to a single Snort instance.
2.2.2 Suricata
Suricata is an intrusion detection system performing deep
packet inspection using pattern matching. Figure 5 shows the Suricata rule structure; the rule header and rule options are the same as in the Snort rule structure.
rule header: alert tcp any any -> 111
rule options: (content:"|00 01 86 a5|"; msg:"mountd access";)
Figure 5: Suricata rule structure.
Suricata uses similar rules to Snort and is compatible
with Snort rules. The rule structure is the same for both
Snort and Suricata. The difference between the two is in
the keywords and protocols that can be specified. Suricata
allows specification of several L7 protocols on top of the L3 protocols supported by Snort: http, ftp, tls, smb, and dns [14]. Some keywords behave differently than in Snort; for instance, the fast_pattern:only keyword makes no difference in processing, unlike in Snort. Some keywords are supported only by Suricata, such as the iprep keyword for matching IP reputation data and the dns_query keyword for analyzing only the DNS response body.
Suricata’s architecture is similar to Snort’s, with one difference: what corresponds to the preprocessor part of Snort’s architecture is divided into two stages in Suricata, decoding and detection. We found this out by studying the source code [15]. Decoding modules add information to
the internal representation of packets in Suricata. Detec-
tion modules rely on this internal representation and pro-
vide keywords for use in rules. Overview of the Suricata’s
architecture is shown in Figure 6.
Packet Capture
Decoder 1
Decoder 2
Packet Decoding
Detection 1
Detection 2
Figure 6: Suricata architecture.
Each packet is first processed in decoding functions and
then in detection modules. Decoding functions read the
packet and save the decoded data into an internal repre-
sentation of the packet. The decoding functions are called
one at a time on the packet. Extending the decoding func-
tionality is possible by implementing a new decoding func-
tion and placing it into the decoding pipeline. The decoding
pipeline starts with the source of captured packets, then L2
is decoded, and then protocols on higher layers are decoded.
Upon decoding, the packets pass detection. The detec-
tion is governed by rules and depends on the decoding step.
The rules are matched with the internal packet representa-
tion. The matching process is broken into several detection
modules in all of which the matching takes place. Unlike
decoding, detection is parallelized and one packet can be
processed in multiple detection modules at the same time.
Extending the detection functionality is possible by imple-
menting a new detection module and registering it in the
table of detection methods.
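The split between serial decoding and parallelizable detection described above can be sketched as follows. This is a conceptual Python sketch, not Suricata source; the decoder and detection functions, the packet fields, and the sample values are invented for illustration:

```python
# Decoders run one at a time and enrich an internal packet representation;
# detection functions then match against that representation.
def decode_ethernet(pkt):
    pkt["l3_proto"] = "IP"          # pretend we parsed the EtherType

def decode_ip(pkt):
    pkt["dst_port"] = 22            # pretend we parsed the L3/L4 headers

DECODERS = [decode_ethernet, decode_ip]       # serial decoding pipeline

def detect_ssh(pkt):
    return "ssh-traffic" if pkt.get("dst_port") == 22 else None

DETECTIONS = [detect_ssh]   # in Suricata these could run in parallel

def process(raw_packet: bytes):
    pkt = {"raw": raw_packet}
    for decode in DECODERS:                   # decoding: one at a time
        decode(pkt)
    alerts = [a for d in DETECTIONS if (a := d(pkt))]
    return alerts

print(process(b"\x00" * 64))   # the dummy frame triggers the sample detection
```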
Suricata is written in C, and modules for Suricata have to be written in C. There are no plans to support C++. C requires greater programming expertise than the Bro language; therefore, this property makes Suricata not the best prototyping tool available.
Suricata is multithreaded out of the box. Even though it
is not as fast as Snort on a single-CPU computer, Suricata
is designed to scale on computers with tens of CPUs [24].
The multithreading approach is different from Snort. Multi-
threaded Snort setups divide the monitored traffic by flows
into multiple parts, each processed by an individual Snort
instance. Suricata, on the other hand, does not require the
traffic balancing since it manages multithreading itself. This
approach makes it more user-friendly.
2.2.3 Bro
Bro [29] is a network security monitor performing deep
packet inspection using event-based analysis. In contrast to
Snort and Suricata, Bro is primarily not rule-driven. In-
stead, it implements a Turing-complete scripting environ-
ment [4]. Rule-based detection as well as arbitrary detection
algorithms can be implemented in this environment. Bro de-
tection rules are described by scripts. Figure 7 shows the Bro architecture.
Figure 7: Bro architecture.
Bro’s scripting environment uses the Bro programming language. It is an interpreted, typed language. What
makes it special are domain-specific types. For example, the
addr type holds an IP address [16]. Variables of structured
types are reference type variables. This makes processing
of large sets or tables efficient, since only the references are
copied, not the data itself. There are two types of collec-
tions, sets and tables. Loops are available in the form of
iteration through collections. The Bro programming lan-
guage lacks other forms of loop control, presumably serving
as a deterrent against overly complex algorithms. This is a
reasonable requirement for network traffic monitoring when
the processing is done in real time. That is exactly the most significant goal of Bro: to allow real-time network traffic analysis and to save already processed results to log files.
The default installation of Bro contains many scripts im-
plementing various sorts of traffic analysis. Some of the
items the default Bro setup monitors are: Bidirectional
flows, DHCP leases, DNS queries and responses, MD5 and
SHA1 hashes of files transmitted over unencrypted proto-
cols, HTTP requests and user agents, port scans, email head-
ers from SMTP traffic, successful and unsuccessful SSH con-
nections, SSL certificates, SYSLOG messages, and traffic tunnels.
Since the preinstalled scripts usually expose an API in the
form of events, they can be used by user scripts, extending
the default functionality.
The core of Bro, implemented in C, processes network
traffic, performs DPI and generates events about what is
happening in the traffic. Events generated by the core are
listed in the .bif files [1]. Many events are generated, span-
ning L2 through L7. Examples are a new ARP packet, closed
TCP connection, HTTP request, etc. In other words, this
type of DPI performs semantic matching of network events
instead of simple pattern matching, as opposed to Snort and
Suricata. The majority of the events provide context, typically
in the form of information about the relevant connection.
The events are then processed by the Bro scripts.
Bro scripts use so-called event handlers to listen to the
events. The usual reactions to events vary. On the one hand,
the simplest possible processing saves the event information
to a log file. On the other hand, some scripts implement
fairly complex processing and generate additional types of
events. This further extends DPI abilities of Bro. Scripts
can handle events generated both by the core and by other
scripts. Figure 8 shows a very short module that just writes
“Hello world!” to the standard output when Bro starts.
module helloworld;

event bro_init() {
    print "Hello world!";
}

Figure 8: A simple Hello world! script.
The scripting engine hosts the scripts and dispatches
events generated both by the scripts and the core to the
scripts listening to these events. It also allows operations
like file access and execution of applications native to the
operating system. This functionality can be used by ad-
vanced scripts. File access may be used to fetch information
from external sources, e.g., a blacklist. Execution facility
may be used for many purposes. One example is reporting
issues to a ticket managing software via email. The sendmail
executable can be used by such a script. Another example
is automatic triggering of a remotely triggered black hole by
executing a program that does the blackholing.
Bro scripts are organized in so-called modules. A mod-
ule can be implemented wholly in one file or can be
broken into several files. Two identifiers with the same
name in two different modules do not collide with each
other. Cross-module references can be made using the name
name of module::name of identifier.
A module can define types, variables, functions, and event
handlers. These entities can be either local to the module
or globally accessible from other modules.
It is possible to define custom types using enum, set, table, vector, and record. Enum in Bro is similar to enum in other languages. Set is similar to HashSet<T> in C# in its functionality [9], albeit the syntax is different. Table is similar to Dictionary<TKey, TValue> in C# [8] with the difference that C# allows only one key while Bro allows multiple keys. Vector is a table indexed by count, Bro’s unsigned integer type. Record is similar to a C# class [6] that contains only
fields [7]. Both Bro record and C# class are reference types,
meaning assignment of its instance copies only the reference
(pointer), not the whole instance. This can be compared to
C# struct which is a value type, meaning assignment of its
instance copies the whole instance.
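The reference semantics and multi-key tables described above behave much like Python’s built-in collections. The following analogy is hypothetical, not Bro code; the addresses are documentation example values:

```python
# Bro: set[addr] -- assigning a structured value copies the reference,
# not the data, so large sets can be passed around cheaply.
blacklist = {""}
alias = blacklist                        # copies the reference only
alias.add("")
print("" in blacklist)        # the addition is visible via both names

# Bro: table[addr, port] of count -- multiple keys, unlike C#'s
# Dictionary, modelled here as a dict keyed by a tuple.
conn_count = {("", 22): 3}
conn_count[("", 22)] += 1
print(conn_count[("", 22)])
```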
Bro can be run both as a single-threaded application
and as a multithreaded distributed application. The single-
threaded mode is called standalone while the multithreaded
one is called cluster. If Bro is used as a platform for devel-
opment of proof-of-concept methods, the standalone mode
is usually more appropriate than the cluster mode. Devel-
opment for the cluster mode is more difficult than for the
standalone mode because additional functionality has to be
used by scripts [3].
2.3 Flow-based Observation Representatives
Flow-based observation architecture contains two main components: a flow exporter and a flow collector. This sec-
tion covers representative implementations of both flow ex-
porters and flow collectors.
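Conceptually, a flow exporter aggregates packets that share a 5-tuple into flow records, which a collector later stores and queries. The following minimal Python sketch illustrates that aggregation step only; the field names and sample values are invented, and real exporters additionally handle timeouts, flow expiry, and export formats such as NetFlow or IPFIX:

```python
# Aggregate packets into flow records keyed by the 5-tuple.
from collections import defaultdict

def aggregate(packets):
    """packets: iterable of (src_ip, dst_ip, src_port, dst_port, proto, size)."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, sport, dport, proto, size in packets:
        key = (src, dst, sport, dport, proto)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += size
    return dict(flows)

packets = [
    ("", "", 50000, 80, "tcp", 60),
    ("", "", 50000, 80, "tcp", 1500),
    ("", "", 443, 40000, "tcp", 120),
]
flows = aggregate(packets)
print(len(flows))   # two distinct 5-tuples yield two flow records
```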
2.3.1 Flow Exporters
nProbe [20] is a commercial open-source flow exporter.
Data can be exported in NetFlow v5, NetFlow v9, and IP-
FIX formats. nProbe has an application visibility (nDPI)
ability, which is used for detection of application-specific
protocols. This information is saved in a custom column
in NetFlow v9 or IPFIX format. It is difficult to obtain
nProbe source code for free.
YAF [19] is an open-source flow exporter. Data are ex-
ported in the IPFIX format. A passive OS fingerprinting
functionality based on the p0f software can be compiled into
YAF. YAF supports modules that implement DPI. However,
YAF does not provide DPI in default setup.
QoF [31] is a fork of YAF. It removes all payload inspec-
tion abilities and instead focuses on passive performance measurement.
ipt-netflow [10] is a plugin for iptables for flow export.
Data can be exported in NetFlow v5, NetFlow v9, and IP-
FIX formats. There is no special functionality besides stan-
dard network flows. There is also no apparent focus towards
high-throughput networks. ipt-netflow is open-source.
pmacct [27] is an open-source flow exporter and flow col-
lector. Data can be exported in NetFlow v5, NetFlow v9,
sFlow v5, and IPFIX formats. It supports high-throughput networks using PF_RING. No DPI-related functionality is
available in pmacct.
softflowd [13] is an open-source flow exporter performing
export to NetFlow v1, v5, and v9 formats. There is no ap-
parent effort to provide anything on top of regular NetFlow
data export.
2.3.2 Flow Collectors
nProbe is not only a flow exporter, it is also a flow collec-
tor. Available storage backends are MySQL, SQLite, text
files, and binary files. The nProbe flow collector was created
because its author deemed other collectors available at the
time to be too cumbersome to use.
IPFIXcol [32] is an IPFIX collector designed for high-
throughput networks. IPFIXcol claims to be flexible: the storage backend can be customized using output plugins. IP-
FIXcol also allows implementation of so-called IPFIX medi-
ators, used for processing of the collected data before it hits
the collector.
flowd [5] is a NetFlow v1, v5, v7, and v9 collector. It
is created under the UNIX philosophy to do just one thing.
The collected data are saved in a binary format. flowd is pro-
vided with Perl and Python interfaces for reading the binary
data. flowd strives for security using privilege separation of
components. flowd is open-source and freely available.
nfdump [11] consists of several tools. The nfcapd tool
listens to NetFlow v5, v7, v9 streams and saves them to
nfcap files. The nfdump tool can be used for analysis of
nfcap files. nfdump uses similar filter syntax to tcpdump.
nfdump is open-source and freely available.
pmacct [26] as a collector has several storage backends
available. It can use MySQL, PostgreSQL, SQLite, Mon-
goDB, BerkeleyDB, and flat files. Among other formats, it
can collect NetFlow v1-v9 and IPFIX.
SiLK [23] is a collector for NetFlow v5, v9, and IPFIX
data. It is designed for high-throughput networks. SiLK
consists of multiple tools and plugins for filtering, analysis,
and processing of flow data.
3. Comparison

With respect to the selection of a network traffic monitor suitable for DPI, the following criteria have been evaluated for each mentioned traffic monitor:
Prototyping: Is the network traffic monitor suitable for
creation of method prototypes?
Developer-friendliness: Does the network traffic mon-
itor allow development of new traffic analysis methods
in an easy-to-use way?
Extensibility: Is it possible to extend the existing func-
tionality of the network traffic monitor in a reasonable
way? What programming language does the API use?
The descriptions of individual network traffic monitors in
this paper indicate answers to these criteria. Table 1 shows
the summary.
Table 1: Bro is the suitable tool for creation of new network traffic analysis methods.

Monitor    | Prototyping | Developer-friendliness | Extensibility
Tcpdump    | No          | No                     | No API
Wireshark  | No          | No                     | No API
Snort      | No          | No                     | C language
Suricata   | No          | No                     | C language
Bro        | Yes         | Yes                    | Bro language
4. Conclusion

There are several different approaches to network monitoring. Each approach is the best fit for a different purpose.
Wireshark is good for manual analysis, predominantly of
small capture files. Tcpdump is packet-oriented and works
well in those use cases where filtering individual packets by
L3/L4 attributes like IP address, TCP flags, payload bytes,
etc. is sufficient. It does not work well for stream reassem-
bly or L7 protocol analysis. Snort and Suricata work well
when the goal is to match patterns in network data. Bro
allows development of advanced detection methods.
Bro is the best software/environment for development of
novel detection or processing techniques. It can be used for
continuous monitoring of high-throughput networks. The
scripting environment is extensible in a memory-safe lan-
guage specialized in network data processing. It is not con-
strained by belonging to a single paradigm for network mon-
itoring like the other tools. Its relative unfamiliarity is a disadvantage compared to better-known tools like Wireshark, tshark, Snort, and Suricata.
References

[1] All Bro scripts. Accessed: 12-01-2016.
[2] AnomalyDetection: Home. Accessed: 12-01-2016.
[3] base/frameworks/cluster/main.bro. Accessed: 12-01-2016.
[4] The Bro network security monitor. Accessed: 12-01-2016.
[5] flowd - small, fast and secure NetFlow collector.
[6] Microsoft: Classes and structs (C# programming guide). Accessed: 12-01-2016.
[7] Microsoft: Classes and structs (C# programming guide). Accessed: 12-01-2016.
[8] Microsoft: Dictionary<TKey, TValue> class. Accessed: 12-01-2016.
[9] Microsoft: HashSet<T> class. Accessed: 12-01-2016.
[10] NetFlow iptables module. Accessed: 12-01-2016.
[11] nfdump.
[12] Snort syntax and simple rule writing. Accessed: 12-01-2016.
[13] softflowd - a software NetFlow probe.
[14] Suricata rules. https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_Rules.
[15] suricata/src at master. Accessed: 12-01-2016.
[16] Types and attributes - Bro 2.2 documentation.
[17] Writing good rules. Accessed: 12-01-2016.
[18] S. Biles. Detecting the unknown with Snort and the Statistical Packet Anomaly Detection Engine (SPADE). Accessed: 12-01-2016.
[19] C. Inacio and B. Trammell. YAF: Yet another flowmeter.
[20] L. Deri. nProbe: an open source NetFlow probe for gigabit networks. In TERENA Networking Conference, 2003.
[21] J. Esler. Offset, depth, distance, and within.
[22] J. Esler. Preprocessors.
[23] C. Gates, M. P. Collins, M. Duggan, A. Kompanek, and M. Thomas. More NetFlow tools for performance and security. In LISA, volume 4, pages 121-132, 2004.
[24] V. Julien. On Suricata performance. Accessed: 12-01-2016.
[25] J. Keuter. Privilege separation. http://wiki.wireshark.org/Development/PrivilegeSeparation.
[26] P. Lucente. pmacct project: IP accounting iconoclasm. Accessed: 12-01-2016.
[27] P. Lucente. pmacct: steps forward interface counters. Accessed: 12-01-2016.
[28] M. Mahoney. Network anomaly intrusion detection research at Florida Tech.
[29] V. Paxson. Bro: a system for detecting network intruders in real-time. Computer Networks, 31(23):2435-2463, 1999.
[30] J. Svoboda, I. Ghafir, and V. Prenosil. Network monitoring approaches: an overview. In Proceedings of the International Conference on Advances in Computing, Communication and Information Technology, Birmingham, UK, 2015. ISBN: 978-1-63248-061-3.
[31] B. Trammell. YAF-derived flow meter for passive performance measurement.
[32] P. Velan and R. Krejčí. Flow information storage assessment using IPFIXcol. In Dependable Networks and Services, pages 155-158. Springer, 2012.
... Once the target is selected, the attacker creates a Point of Entry (PoE), and once inside the targeted network a communication channel with the attacker should be established, so the rest of the attack can continue with no interference. This initial stage of the attack typically includes an initial dropper file, which can contain any type of malware that has the main purpose to download another file from the Internet, which will be useful to continue with the rest of the attack [2]. ...
... An interesting detection method is the Intrusion Kill Chain (IKC) model [2]. This method facilitates the identification of multi-stage attacks by following the IKC seven-phase model that an attacker generally follows to carry out an attack. ...
... So, by following the IKC model, we could be able to break the attack by interfering any of the seven phases. Breaking the attack at an early stage can stop the multi-stage on time [2]. ...
... Some of these threats are significantly unpredictable, as various malicious agents can exploit several vulnerabilities before compromising a prized asset within the heterogeneous system, especially in IoT systems. To develop dependable IoT-Driven applications, the security attribute of the system must be well considered from the design stage of a system [26,56]. This effort is necessary to ensure that the systems are guarded against the exploitation of intended malicious agents from compromising the system's CIA and other security attributes [57]. ...
Full-text available
The rapid progress of the Internet of Things (IoT) has continued to offer humanity numerous benefits, including many security- and safety-critical applications. However, unlocking the full potential of IoT applications, especially in high-consequence domains, requires assurance that IoT devices will not constitute risk hazards to users or the environment. To design safe, secure, and reliable IoT systems, numerous frameworks have been proposed to analyse safety and security, among other properties. This paper reviews some of the prominent classical and model-based systems engineering (MBSE) approaches to the safety and security analysis of IoT systems. The review established that most analysis frameworks are based on classical manual approaches, which evaluate the two properties independently. The manual frameworks tend to inherit the natural limitations of informal system modelling, such as human error, cumbersome processes, time consumption, and a lack of support for reusability. Model-based approaches have been incorporated into the safety and security analysis process to simplify it and improve the efficiency and manageability of the system design. Conversely, the existing MBSE safety and security analysis approaches in the IoT environment are still in their infancy. The limited number of proposed MBSE approaches have only considered limited and simple scenarios, which do not yet adequately evaluate the complex interactions between the two properties in the IoT domain. The findings of this survey are that existing methods have not adequately addressed the analysis of safety/security interdependencies, detailed cyber-security quantification analysis, or the unified treatment of safety and security properties. The limitations of the existing classical and MBSE frameworks clearly leave gaps in any meaningful assessment of IoT dependability. To address some of these gaps, we propose a possible research direction for developing a novel MBSE safety and security co-analysis framework for the IoT domain.
... According to the study of Ghafir and Prenosil [19], network monitoring is a set of tools that allows network administrators to keep track of the current state and long-term trends of a complex computer network. This study discusses the current state of network monitoring. ...
... Originally called 'hidden services', websites that are exclusive to Tor networks and have the top-level domain '.onion' are now called onion services [23]. These are websites that are created anonymously and can only be accessed via a link provided by the website's host [12]. Onion services are private servers that allow two-way anonymity. ...
... It has been widely used in traffic management [2], security monitoring [3], intelligent transportation systems [4], robot navigation [5], autopilot [6], and video surveillance [7]. ...
Full-text available
This study proposes a visual tracking system that can detect and track multiple fast-moving appearance-varying targets simultaneously with 500 fps image processing. The system comprises a high-speed camera and a pan-tilt galvanometer system, which can rapidly generate large-scale high-definition images of the wide monitored area. We developed a CNN-based hybrid tracking algorithm that can robustly track multiple high-speed moving objects simultaneously. Experimental results demonstrate that our system can track up to three moving objects with velocities lower than 30 m per second simultaneously within an 8-m range. The effectiveness of our system was demonstrated through several experiments conducted on simultaneous zoom shooting of multiple moving objects (persons and bottles) in a natural outdoor scene. Moreover, our system demonstrates high robustness to target loss and crossing situations.
... The only way to stay ahead of new vulnerabilities and attacks is through rapid detection and response [3]. Unfortunately, constant security monitoring is a key component missing in most networks [4,5]. ...
Full-text available
Intrusion detection systems (IDSs) are among the most important components used to monitor networks for possible cyber-attacks. However, the amount of data that must be inspected poses a great challenge to IDSs. With the recent emergence of various big data technologies, there are ways to overcome the problem of the increased amount of data. Nevertheless, some of these technologies inherit data distribution techniques that can be problematic when splitting sensitive data, such as network data frames, across cluster nodes. The goal of this paper is the design and implementation of a Hadoop-based IDS. We propose different input split techniques suitable for distributing network data across cloud nodes and test the performance of their Apache Hadoop implementations. Four different data split techniques are proposed, described in detail, and analysed. The system is evaluated on an Apache Hadoop cluster with 17 slave nodes. We show that processing speed can differ by more than 30% depending on the chosen input split design strategy. Additionally, we show that a malicious level of network traffic can slow down the processing, in our case by nearly 20%. The scalability of the system is also discussed.
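The core difficulty with distributing network data across mapper nodes is that a record must never be cut across two splits. A minimal sketch of one such record-aligned splitting strategy (not the paper's implementation; record and split sizes are invented for illustration):

```python
# Illustrative sketch: splitting a stream of fixed-length network records
# into Hadoop-style input splits so that no record straddles two nodes.

RECORD_SIZE = 64          # hypothetical fixed frame-record length in bytes
TARGET_SPLIT_SIZE = 1024  # hypothetical desired split size in bytes

def record_aligned_splits(total_bytes, record_size=RECORD_SIZE,
                          target=TARGET_SPLIT_SIZE):
    """Return (offset, length) pairs aligned to record boundaries."""
    # Round the split size down to a whole number of records.
    records_per_split = max(1, target // record_size)
    step = records_per_split * record_size
    splits = []
    offset = 0
    while offset < total_bytes:
        length = min(step, total_bytes - offset)
        splits.append((offset, length))
        offset += length
    return splits

splits = record_aligned_splits(10_000)
print(splits[-1])  # (9216, 784)
```

Every split except possibly the last then holds a whole number of records, so each mapper can parse its chunk independently.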
Conference Paper
Software-Defined Networking (SDN) and virtualised environments raise new challenges for network monitoring tools. The dynamic and flexible nature of these network technologies requires the monitoring infrastructure to adapt in order to overcome challenges in analysing and interpreting the monitored network traffic. This paper describes a concept for automatic on-demand deployment of monitoring probes and for correlating network data with infrastructure state and configuration over time. Such an approach to monitoring SDN and virtual networks is applicable in several use cases, such as IoT networks and anomaly detection. It increases visibility into complex and dynamic networks. Additionally, it can help with the creation of well-annotated datasets that are essential for further research.
Full-text available
The recent COVID-19 pandemic has showcased the implications of using virtual private networks (VPNs), as home working is now common. Establishing the current state of knowledge on VPNs and their processes is vital. This corroborates an up-to-date understanding of the fundamentals of VPNs and their usage in today's society, and it also informs how to use VPNs better. Insight into the security issues VPNs face, and possible solutions to them, allows gaps to be identified for potential future research. Addressing these gaps would then indicate how to further improve VPNs, making sure they are indeed beneficial to users.
Technical Report
Full-text available
This report presents a deployable solution to improve the cybersecurity situational awareness of the legacy SCADA system infrastructure in power grids. The main goal of this project is to provide system owners and operators with a highly trusted, intelligent alarm system and comprehensive situational awareness of ongoing or potential cybersecurity threats on the grid network. The key contributions of this project include: (1) the development of software, the Intrusion Detection Visualizer for the Operational Technology Network (IViz-OT), to visualize and locate intrusions on the grid network; (2) testing the signature-based Hybrid Intrusion Detection for Energy Systems (HIDES) (Singh et al. 2020) for different types of intrusions; (3) the integration of HIDES and IViz-OT into the visualization dashboard; and (4) real-time testing using a hardware-in-the-loop test bed.
Full-text available
Network monitoring and measurement have become more and more important in modern, complicated networks. In the past, administrators might monitor only a few network devices or fewer than a hundred computers, with network bandwidth of just 10 or 100 Mbps; now, however, administrators have to deal not only with higher-speed wired networks (more than 10 Gbps, as well as ATM (Asynchronous Transfer Mode) networks) but also with wireless networks. Network administrators are constantly striving to maintain smooth operation of their networks. Network monitoring is a set of mechanisms that allows network administrators to know the instantaneous state and long-term trends of a complex computer network. This paper provides the readers with an overview of the current network monitoring approaches, their architectures, features and properties. In addition, it presents a comparison between those approaches.
Full-text available
The number and types of attacks against networked computer systems have raised the importance of network security. Today, network administrators need to be able to investigate and analyse network traffic to understand what is happening and to deploy an immediate response in case of an identified attack. Wireshark proves to be an effective open-source tool for the study of network packets and their behaviour. In this regard, Wireshark can be used to identify and categorise various types of attack signatures. The purpose of this paper is to demonstrate how Wireshark is applied in network protocol diagnosis and can be used to discover traditional network attacks such as port scanning, covert FTP and IRC channels, ICMP-based attacks, and BitTorrent-driven denial of service. In addition, the case studies in this paper illustrate the idea of using Wireshark to identify new attack vectors.
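A port scan of the kind such packet-level case studies surface has a simple statistical signature: one source probing many distinct destination ports. A toy heuristic over already-parsed packet tuples (the tuple shape and threshold are illustrative assumptions, not taken from the cited paper):

```python
# Toy port-scan heuristic: flag any source IP that touches many distinct
# (destination, port) pairs. Packet tuples and the threshold are invented
# for illustration.
from collections import defaultdict

SCAN_THRESHOLD = 10  # hypothetical: distinct targets before flagging a source

def detect_port_scans(packets, threshold=SCAN_THRESHOLD):
    """packets: iterable of (src_ip, dst_ip, dst_port) tuples."""
    targets_per_src = defaultdict(set)
    for src, dst, dport in packets:
        targets_per_src[src].add((dst, dport))
    return {src for src, targets in targets_per_src.items()
            if len(targets) >= threshold}

# A scanner probing ports 1-20 on one host, plus benign repeated HTTPS traffic.
traffic = [("10.0.0.5", "10.0.0.1", p) for p in range(1, 21)]
traffic += [("10.0.0.7", "10.0.0.1", 443)] * 50
print(detect_port_scans(traffic))  # {'10.0.0.5'}
```

In practice the equivalent check is done interactively in Wireshark with display filters rather than in code; the sketch only makes the underlying signature explicit.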
Full-text available
Flow monitoring has become a prevalent method for monitoring traffic in high-speed networks. By focusing on the analysis of flows, rather than individual packets, it is often said to be more scalable than traditional packet-based traffic analysis. Flow monitoring embraces the complete chain of packet observation, flow export using protocols such as NetFlow and IPFIX, data collection, and data analysis. In contrast to what is often assumed, all stages of flow monitoring are closely intertwined. Each of these stages therefore has to be thoroughly understood, before being able to perform sound flow measurements. Otherwise, flow data artifacts and data loss can be the consequence, potentially without being observed. This paper is the first of its kind to provide an integrated tutorial on all stages of a flow monitoring setup. As shown throughout this paper, flow monitoring has evolved from the early 1990s into a powerful tool, and additional functionality will certainly be added in the future. We show, for example, how the previously opposing approaches of deep packet inspection and flow monitoring have been united into novel monitoring approaches.
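The metering stage in the chain above (packet observation before flow export) can be sketched minimally: packets sharing a 5-tuple key are merged into one flow record with packet and byte counters. Field names here are illustrative, not actual IPFIX information elements:

```python
# Minimal sketch of flow metering in the spirit of NetFlow/IPFIX: aggregate
# packets into per-5-tuple flow records instead of keeping every packet.
from collections import namedtuple

Packet = namedtuple("Packet", "src dst sport dport proto length")

def meter_flows(packets):
    flows = {}
    for p in packets:
        key = (p.src, p.dst, p.sport, p.dport, p.proto)  # flow 5-tuple
        rec = flows.setdefault(key, {"packets": 0, "bytes": 0})
        rec["packets"] += 1
        rec["bytes"] += p.length
    return flows

pkts = [Packet("10.0.0.1", "10.0.0.2", 12345, 80, "TCP", 1500),
        Packet("10.0.0.1", "10.0.0.2", 12345, 80, "TCP", 40),
        Packet("10.0.0.2", "10.0.0.1", 80, 12345, "TCP", 1500)]
flows = meter_flows(pkts)
print(len(flows))  # 2 unidirectional flows
```

A real exporter additionally applies active/idle timeouts to expire flows and encodes the records into NetFlow or IPFIX messages for the collector; the sketch covers only the aggregation step.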
Middleboxes play a major role in contemporary networks, as forwarding packets is often not enough to meet operator demands, and other functionalities (such as security, QoS/QoE provisioning, and load balancing) are required. Traffic is usually routed through a sequence of such middleboxes, which either reside across the network or in a single, consolidated location. Although middleboxes provide a vast range of different capabilities, there are components that are shared among many of them. A task common to almost all middleboxes that deal with L7 protocols is Deep Packet Inspection (DPI). Today, traffic is inspected from scratch by all the middleboxes on its route. In this paper, we propose to treat DPI as a service to the middleboxes, implying that traffic should be scanned only once, but against the data of all middleboxes that use the service. The DPI service then passes the scan results to the appropriate middleboxes. Having DPI as a service has significant advantages in performance, scalability, robustness, and as a catalyst for innovation in the middlebox domain. Moreover, technologies and solutions for current Software Defined Networks (SDN) (e.g., SIMPLE [41]) make it feasible to implement such a service and route traffic to and from its instances.
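The "scan once, serve many" idea can be sketched minimally: the DPI service holds the union of all middleboxes' patterns, scans each payload a single time, and dispatches per-middlebox results. The middlebox names and patterns are invented for illustration, and a production service would use a multi-pattern matcher such as Aho-Corasick rather than substring search:

```python
# Hedged sketch of DPI as a service: one scan pass over the payload against
# the combined pattern set, then per-subscriber dispatch of the matches.
# Subscriptions and patterns below are hypothetical.

def dpi_service(payload, subscriptions):
    """subscriptions: {middlebox_name: [patterns]} -> {middlebox: matches}."""
    # Build the combined pattern set once across all subscribers.
    all_patterns = {p for pats in subscriptions.values() for p in pats}
    matched = {p for p in all_patterns if p in payload}  # single scan pass
    # Dispatch only the relevant matches to each middlebox.
    return {mb: sorted(matched & set(pats))
            for mb, pats in subscriptions.items()}

subs = {"ids": ["evil.exe", "cmd.exe"], "qos": ["video/mp4"]}
results = dpi_service("GET /evil.exe HTTP/1.1", subs)
print(results)  # {'ids': ['evil.exe'], 'qos': []}
```

The performance argument of the paper follows directly: the payload is traversed once regardless of how many middleboxes subscribe, instead of once per middlebox.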
Users’ demands have dramatically increased due to the widespread availability of broadband access and new Internet avenues for accessing, sharing and working with information. In response, operators have upgraded their infrastructures to survive in a market as mature as the current Internet. This has meant that most network processing tasks (e.g., routing, anomaly detection, monitoring) must deal with challenging rates, a task traditionally accomplished by specialized hardware, e.g., FPGAs. However, such approaches lack either flexibility or extensibility, or both. As an alternative, the research community has proposed the use of commodity hardware to provide flexible and extensible cost-aware solutions, entailing lower operational and capital expenditure. In this scenario, we explain how the arrival of commodity packet engines has revolutionized the development of traffic processing tasks. Thanks to the optimization of both NIC drivers and standard network stacks, and by exploiting concepts such as parallelism and memory affinity, impressive packet capture rates can be achieved on hardware valued at a few thousand dollars. This tutorial explains the foundation of this new paradigm, i.e., the knowledge required to capture packets at multi-Gb/s rates on commodity hardware. Furthermore, we thoroughly explain and empirically compare current proposals and, importantly, explain how to apply such proposals with a number of code examples. Finally, we review successful use cases of applications developed over these novel engines.
Conference Paper
Network monitoring has become a significant part of network management. Each environment and type of network has its own specific needs, so a flexible approach is required to allow network traffic monitoring in various environments. The current generation of flow collectors provides only limited flexibility, mainly due to the limits of their data storage formats. Moreover, it is quite a challenging task to compare particular storage formats and their suitability for a specific environment and usage. In this paper we present IPFIXcol, a flow collector framework designed to make changing data storage formats easy. This way, we plan to evaluate the performance and suitability of various data storage formats for specific tasks. The results can be used to build the most appropriate data storage for specific production environments.
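The pluggable-storage idea behind such a collector framework can be sketched as a common write interface that the collection pipeline targets, so storage formats can be swapped and benchmarked independently. All class and field names here are invented for illustration, not IPFIXcol's actual API:

```python
# Illustrative sketch (names hypothetical): a collector pipeline writing flow
# records through a common storage interface, so backends are interchangeable.

class StorageBackend:
    def store(self, record):
        """Common interface every storage format must implement."""
        raise NotImplementedError

class ListBackend(StorageBackend):
    """In-memory stand-in for a real format (e.g. flat files or a column store)."""
    def __init__(self):
        self.records = []
    def store(self, record):
        self.records.append(record)

def collect(flow_records, backend):
    # The pipeline never knows which concrete format it is writing to.
    for rec in flow_records:
        backend.store(rec)
    return backend

backend = collect([{"src": "10.0.0.1", "bytes": 1540}], ListBackend())
print(len(backend.records))  # 1
```

Swapping the backend then changes the on-disk format without touching the collection pipeline, which is what makes per-format performance comparisons practical.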