Augmented reality for visualizing security data
for cybernetic and cyberphysical systems
Abstract— The paper discusses the use of virtual and
augmented reality for visual analytics in information security.
We ask two questions: “In which areas of information security
visualization can AR be useful?” and “How does it differ from
similar visualization methods at the level of information
perception?”. The paper answers the first question by examining
the information security areas and visualization models that can
be used in VR/AR security visualization. It answers the second
question with user-based experiments that evaluate the
perception of visual components in VR. Based on these
experiments, classes of visual components with different levels
of effectiveness in VR/AR are presented.
Keywords— virtual reality, augmented reality, information
security, data visualization, visualization evaluation.
I. INTRODUCTION
The analysis of information security events is performed mainly
visually. Events and incidents are displayed using various
visualization models, and based on visual analysis the user
selects the necessary countermeasures.
As the volume of analyzed data and the complexity of computer
network and system architectures grow, so does the complexity
of the required visualization models. One possible solution is to
use 3D visualization models in virtual/augmented reality
alongside 2D models. However, virtual/augmented reality
interfaces are still poorly studied, so there is a need to
investigate how users perceive the components of
three-dimensional visualization models and how these
components correspond to information security metrics.
The main contribution of the paper is as follows. First, the
areas of information security where VR/AR visualization can
be useful are identified. Second, criteria for the effectiveness of
users’ perception of visualized metrics are proposed. Third, a
classification of visualization components by effectiveness is
presented, based on user experiments on metric perception in
VR/AR.
The paper has the following structure. The analysis of
related work is provided in Section II. Section III considers the
information security areas that can use AR and VR
visualization. The visualization perception tests and the visual
component effectiveness classification based on their results are
described in Section IV. Section V presents the main
conclusions and formulates directions for future research.
II. RELATED WORKS
Augmented reality is a relatively new direction. For a long
time, AR devices were unavailable due to their professional
orientation (pilots, racers, etc.) or their high cost, which
significantly limited their widespread use. The following
augmented reality devices are currently available: Google
Glass (2014), Microsoft HoloLens (2016), MetaVision (2017),
Magic Leap (2018). Separately, there are AR frameworks based
on mobile devices without human-machine interaction
capabilities for data management: Apple ARKit (2017) and
Google ARCore (2017).
Nevertheless, a number of studies have already been
undertaken on visualization in virtual reality, the technology
closest to augmented reality. A group of South Korean and
American researchers investigated spherical layouts for
visualizing graph structures in virtual reality [1]. Ana Asnes
Becker, a data journalist at The Wall Street Journal, visualized
the history of NASDAQ quotes in virtual reality [2]. Michal
Koutek and Frits Post developed the MolDRIVE visualization
system, which allows visualization and control of molecular
dynamics experiments [3]. In 2017, the RSAC presented
opportunities for using virtual reality in cybersecurity by
companies such as Landrian Networks and ForeScout: Landrian
Networks provided insights on the use of virtual reality in the
work of situational security centers [4], and ForeScout on
securing the Internet of Things [5]. Bob Levy introduced the
Virtual Cove project, which visualizes stock indices in
augmented reality [6]. E-Semble bv is developing emergency
simulation programs to train qualified personnel. Brown
University (Providence, USA) uses virtual reality for various
scientific experiments and training in psychology, surgery,
geology, bioengineering, and other fields [7]. The Instituto de
Engenharia Nuclear (Rio de Janeiro, Brazil) is exploring the
possibilities of using virtual reality to ensure the physical
security of nuclear facilities [8]. At the St. Petersburg
University of Information Technologies, Mechanics, and Optics
(ITMO), cognitive visualization technologies for temporal
integrated networks and situational
awareness management during global mass events were
proposed.
In general, research on visualization through augmented
reality is only beginning, and no large studies have been
conducted in this area. Moreover, comprehensive studies of
augmented reality are lacking not only in the field of security
data visualization, but also in data visualization as part of data
science in general. As a result, it remains unclear how AR can
be used in security.
Thus, the necessary basis for developing security data
visualization systems in augmented reality exists; however,
research in this area is still at an initial stage.
There are two main problems facing developers of AR in
security:
1) In which areas of information security visualization can
AR be useful?
2) How does it differ from similar visualization methods at
the level of information perception?
III. AR IN SECURITY
To determine the possibilities of using AR for solving
information security problems, we studied various areas of
information protection: which support and decision-making
processes exist in them, and how data visualization is applied
in them.
Three classes of information security areas were studied:
physical security, cybersecurity, and cyberphysical security.
For each information security process, we determined which
visualization models are used and what tasks they solve within
that process.
Physical Security:
1. Organization and certification of security of buildings
and areas – development of projects for protecting premises
(sensor installation, floor plan development, furniture
arrangement, etc.) and certification of a room’s compliance
with a certain security class.
Visualization models: room graphs [9], visualization of
physical objects and processes (for example, camera viewing
angles), device dependency graphs [10].
2. Personnel training - monitoring staff awareness of
current security policies.
Visualization models: bar charts for comparing personnel
indicators [11], line charts, pie charts [14].
3. Incident monitoring and counteraction - monitoring
the controlled area, controlling access of people and vehicles,
monitoring events and incidents from physical security
systems, developing countermeasures and giving instructions
to the systems.
Visualization models: maps of the controlled area, Voronoi
maps [12], graphs of transitions between areas (where edges
represent access control points), line/pie/bar charts of transport
and personnel indicators.
4. Monitoring of incidents and counteraction within urban
situational centers - countering terrorist, man-made, and
natural threats, as well as eliminating the consequences of
disasters, by developing countermeasures and issuing orders to
services.
Visualization models: city maps overlaid with
bubble/line/pie/bar charts of statistical data [11, 14], population
movement graphs [13].
Cybersecurity:
1. Risk assessment - asset assessment, threat assessment,
attack route prediction, countermeasure selection, and
return-on-investment calculations.
Visualization models: service dependency graphs [10], risk
tree maps [10], attack route matrices [16], radial attack trees
[15], line/pie/bar charts of asset metrics, geometric
countermeasure visualizations [10].
2. Network processes - active and passive traffic
analysis, firewall monitoring, network topology status
monitoring (access rights, traffic flows, connected devices,
etc.).
Visualization models: line/pie/bar charts of traffic
statistics [17], scatter plots, parallel coordinates and hive plots
[24], network and flow graphs [14], circle packing [18], trees
and radial trees for hierarchical networks, traffic flow chord
diagrams, access control matrices [19].
3. Information leaks - monitoring the actions of
employees, searching for employees at risk, tracking insiders to
gather evidence.
Visualization models: line/pie/bar charts of statistical data
on events and risks, word clouds, interval graphs [11].
4. Social networks - identifying opinion leaders,
identifying destructive communities (e.g., HIV denialism,
destructive sects, homeopathy popularization, etc.), identifying
illegal content, and monitoring the dissemination and
attenuation of information.
Visualization models: line/pie/bar charts of statistical
data, user and content dependency graphs, repost trees [20].
5. Forensics - restoring the sequence of committing a
cybercrime, collecting evidence, presenting evidence in court.
Visualization models: line/pie/bar charts of temporal data
[17], as well as visualization models depending on the type of
forensics (for network forensics, the models relevant to the
network processes class can be used).
6. Anti-malware - development of signature and
proactive methods for detecting malware, structural analysis of
executable files.
Visualization models: line/pie/bar charts of statistical data
[14], graphs of code blocks (for example, in the IDA
disassembler) [21].
Cyberphysical Security:
1. Embedded devices and the Internet of Things - self-
organization of networks, monitoring the status of cyber-
physical networks, and monitoring of individual Internet of
Things devices.
Visualization models: graphs of embedded device
networks, Voronoi maps, tree maps [12], trees and radial trees
for hierarchical networks, Voronoi maps for sensor coverage
areas, line/pie/bar charts of device parameters [11, 14].
2. Cyber-physical access control - monitoring the
movement of users inside rooms and buildings, user
authorization on devices, security models for access control.
Visualization models: graphs of employee premises and
movements [22], triangular coordinates [23], room maps,
Voronoi maps, access matrices, parallel coordinates of access,
triangular access matrices [19].
3. Robotic systems - monitoring the status of smart
vehicles.
Visualization models: maps of the area and premises [22],
graphs of drone networks, line/pie/bar charts of CAN bus
traffic parameters [25].
The list of visualization models used to ensure security is as
follows: graphs, matrices, triangular matrices, chord diagrams,
interval graphs, trees, radial trees, tree maps, circle packing,
Voronoi maps, bar graphs, line/pie/bar/bubble charts,
geometric visualizations, scatter plots, parallel coordinates,
hive plots, word clouds, triangular coordinates, room maps,
terrain maps, and physical objects.
Thus, the general structures and characteristics of security
data were obtained, which makes it possible to formulate
approaches to data visualization in augmented reality for
extensive classes of tasks and to build universal visualization
models.
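To make part of this catalog easier to reuse in tooling, it can be encoded as a simple lookup table. The following is a minimal sketch in Python; the dictionary keys and helper name are illustrative choices, not components of the cited systems.

```python
# Sketch: encoding a fragment of the Section III catalog
# (security process -> visualization models) as a lookup table.
SECURITY_VIS_MODELS = {
    ("physical security", "incident monitoring"): [
        "map of the controlled area", "Voronoi map",
        "area transition graph", "line/pie/bar chart"],
    ("cybersecurity", "risk assessment"): [
        "service dependency graph", "risk tree map", "attack route matrix",
        "radial attack tree", "line/pie/bar chart"],
    ("cyberphysical security", "embedded devices and IoT"): [
        "device network graph", "Voronoi map", "tree map",
        "radial tree", "line/pie/bar chart"],
}

def models_for(area: str, process: str) -> list[str]:
    """Return the visualization models catalogued for a given process."""
    return SECURITY_VIS_MODELS.get((area, process), [])

print(models_for("cybersecurity", "risk assessment"))
```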
IV. PERCEPTION IN VR – EXPERIMENTAL EVALUATION
In order to develop effective methods of data visualization,
it is necessary to determine the effectiveness of the components
that make up the image. User perceptions of images in 2D and
3D vary significantly. The main differences are as follows:
(1) In 3D, the user perceives space. As a result, the notions
of size and relative position change, and the dependence of
color on lighting appears.
(2) In 3D mode with six degrees of freedom, the user
begins to perceive himself as an active observer who can move
between objects and observe them from different sides.
(3) In 3D mode with basic interaction capabilities, the user
begins to perceive virtual objects as real physical objects and
attributes physical properties to them.
Thus, the need to study cognitive perception in 3D stems
from the fact that the effectiveness of information
interpretation changes. Accordingly, the perception efficiency
of the basic components of visualization (size, color,
transparency, etc.) needs to be studied.
To numerically express (and be able to compare) the
perception efficiency of the various components, experiments
were conducted. The key points of their implementation are
summarized below.
Determination of effectiveness
The effectiveness of visualization is difficult to evaluate
and formalize. To determine the effectiveness, we proceeded
from two basic requirements for support and decision-making
systems: accuracy of information interpretation and speed of
decision-making.
The process of visualization and decision making is, in general
terms, as follows. A metric (for example, the number 15) is
converted into a graphic component (for example, a ball with
volume 15). The user examines the ball and tries to interpret
its size. They give an answer (e.g., 14) and spend some time
making the decision (e.g., 10 seconds). Thus, effectiveness is
expressed in the speed and accuracy of metric interpretation.
The values are characterized by the upper quartile Q3 of their
distributions. Comparing these values across components allows
us to identify more effective and less effective components of
visualization in 3D.
We distinguish six efficiency classes: three by accuracy and
three by speed.
By accuracy:
1) Accurate - graphic components recommended for
visualizing accurate metrics. Accurate metrics are metrics that
must convey an exact number, for example, the number of
vulnerabilities or the number of employees.
2) With errors - graphic components recommended for
visualizing inaccurate metrics. Inaccurate metrics are metrics
whose overly precise interpretation may affect decision
making, although it should not, for example, the probability of
an attack or the average criticality value.
3) Inaccurate - graphic components that give a large
error. Not recommended for use.
By speed:
1) Fast - graphic components recommended for fast
decision making.
2) Acceptable - graphic components that are acceptable
for fast decision making.
3) Slow - graphic components not recommended for
quick decision making.
Thus, perception efficiency was determined by classifying
the upper quartile Q3 of the distributions of two main
parameters that indicate how successful the interpretation was:
accuracy and speed. Speed is the difference between the time
the task was completed and the time it started. Accuracy is the
ratio of the normalized interpreted value to the normalized
correct value.
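As a concrete illustration of this procedure, the sketch below computes the upper quartile Q3 of one component’s normalized accuracy deviations and response times and maps them to the classes above. This is a minimal sketch: the function name and sample data are hypothetical, and the thresholds are the ones that emerge from the test A results reported later in this section (accuracy Q3 boundaries of 1 and 2, time Q3 boundaries of 30 and 60 seconds).

```python
import numpy as np

def classify_component(deviations, times_s,
                       acc_bounds=(1.0, 2.0),      # accuracy Q3 boundaries (test A results)
                       time_bounds=(30.0, 60.0)):  # time Q3 boundaries in seconds
    """Classify a visualization component by the upper quartile (Q3) of its
    normalized accuracy deviations and response times."""
    acc_q3 = np.percentile(deviations, 75)
    time_q3 = np.percentile(times_s, 75)

    if acc_q3 < acc_bounds[0]:
        accuracy = "accurate"
    elif acc_q3 < acc_bounds[1]:
        accuracy = "with errors"
    else:
        accuracy = "inaccurate"

    if time_q3 < time_bounds[0]:
        speed = "fast"
    elif time_q3 < time_bounds[1]:
        speed = "acceptable"
    else:
        speed = "slow"
    return accuracy, speed, acc_q3, time_q3

# Example: a component interpreted with moderate error but answered quickly.
print(classify_component(deviations=[0.1, 0.4, 1.2, 0.8], times_s=[12, 25, 31, 18]))
```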
Test Objects
We distinguish two types of visualization components with
respect to metrics: quantitative and categorical.
Quantitative visualization components can encode numerical
values, so two visualized objects can be compared with each
other. For example (size perception testing): “how many times
is the first ball larger than the second?”
Categorical visualization methods cannot do this. For
example, one cannot ask the question (shape perception
testing): “how many times is one object more triangular than
the second?”. But they can visualize categories (shape
perception testing): “how many triangles do you see among the
balls?”.
It should be noted that quantitative methods can also
visualize categorical concepts (size perception testing): “how
many large and small balls do you see?”.
We identified visualization components from Christian
Leborg's book Visual Grammar and selected only those
components that can visualize metrics. We then divided these
components into two classes. The class of quantitative
components includes size (volume in 3D), color hue, color
saturation, transparency, the cubic coordinate system, the radial
(cylindrical in 3D) coordinate system, rotation, scaling, and
movement. The class of categorical components includes
shape, primary color, and texture.
Types of tests
Two types of tests were identified.
Test A is quantitative. The user was presented with two
objects identical in everything except the tested visualization
component - for example, two static cubes of the same size,
shape, texture, and transparency, but with different shades of
color. It was necessary to determine “how many times the first
object is more X than the second”, where X is large,
transparent, blue, saturated, etc.
Test B is categorical. The user was presented with 32
objects identical in everything except the tested visualization
component. Among the 32 objects there were 4 categories, for
example, 32 objects with 4 different shapes. It was necessary to
determine “how many objects of types X and Y are there in
total”, where X and Y are triangles, striped objects, rotating
objects, blue objects, etc.
For each quantitative component, tests A and B were
carried out; for categorical components, only test B. Each test
had 3 different combinations, so three measurements of the
perception of each component were made per user.
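As a sketch of how the two stimulus types could be generated, the code below builds a test A pair and a test B scene. The `TestObject` attributes and function names are hypothetical and do not reproduce the Unity implementation used in the experiments.

```python
import random
from dataclasses import dataclass

@dataclass
class TestObject:
    # All attributes are identical across objects except the one under test.
    size: float = 1.0
    hue: float = 0.5
    shape: str = "sphere"

def make_test_a(component: str, ratio: float):
    """Test A (quantitative): two objects identical except the tested component;
    the subject estimates how many times the first exceeds the second."""
    first, second = TestObject(), TestObject()
    setattr(first, component, getattr(first, component) * ratio)
    return first, second

def make_test_b(component: str, categories: list, n: int = 32):
    """Test B (categorical): n objects drawn from 4 categories of the tested
    component; the subject counts the objects belonging to two given categories."""
    objects = [TestObject() for _ in range(n)]
    for obj in objects:                      # category sizes vary from scene to scene
        setattr(obj, component, random.choice(categories))
    return objects

pair = make_test_a("size", ratio=3.0)        # "how many times is the first larger?"
scene = make_test_b("shape", ["sphere", "cube", "pyramid", "cylinder"])
```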
Test execution
Tests were conducted at [omitted for blind review] on
graduate students of [omitted for blind review] and university
students [omitted for blind review]. The tests were
implemented as a program written in Unity and run on an
HTC Vive VR headset.
The subject received brief instruction on how to put on
and take off the headset, how to use the controllers, how to
enter answers on the virtual keyboard, how to grab and release
virtual objects, and how to move. After the briefing, the subject
proceeded to testing under the supervision of an observer for
the first 6 tasks. The observer did not answer questions related
to the specifics of the tests or the interpretation of the
questions; they only helped the subject get used to the
equipment and interaction. The first 6 tasks simulated testing so
that the subject could learn how to use the headset. In addition,
during the first few tasks the subjects often lingered, playing
with the virtual objects and exploring the technology. After the
first 6 tasks, the observer withdrew so that no one could
influence the subject.
The test subject had to complete 65 tasks:
size - 3 quantitative A tests and 3 categorical B tests;
color tone - 3 quantitative A tests and 3 categorical B tests;
color saturation - 3 quantitative A tests and 3 categorical B tests;
transparency - 3 quantitative A tests and 3 categorical B tests;
rotation - 3 quantitative A tests and 3 categorical B tests;
scaling - 3 quantitative A tests and 3 categorical B tests;
movement - 3 quantitative A tests and 3 categorical B tests;
shape - 3 categorical B tests;
primary color - 3 categorical B tests;
texture - 3 categorical B tests;
cubic coordinate system - 3 quantitative A tests;
radial coordinate system - 3 quantitative A tests;
size and shape - 3 quantitative A tests;
graph perception - 5 tests.
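For reference, the task schedule can be written down as a small table. The sketch below (with names chosen for illustration) reproduces the per-component counts from the list above and checks that they sum to the 65 tasks each subject completed.

```python
# Per-component task counts from the list above: (A tests, B tests).
TASK_PLAN = {
    "size": (3, 3), "color tone": (3, 3), "color saturation": (3, 3),
    "transparency": (3, 3), "rotation": (3, 3), "scaling": (3, 3),
    "movement": (3, 3),
    "shape": (0, 3), "primary color": (0, 3), "texture": (0, 3),
    "cubic coordinate system": (3, 0), "radial coordinate system": (3, 0),
    "size and shape": (3, 0),
}
GRAPH_PERCEPTION_TASKS = 5

total = sum(a + b for a, b in TASK_PLAN.values()) + GRAPH_PERCEPTION_TASKS
assert total == 65  # matches the 65 tasks each subject completed
```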
The last four types of tests should be noted separately.
The size-and-shape test was added to assess how the perception
of size differs for different shapes. The graph perception test
involved an approximate estimation of the number of vertices
and edges of large graphs. These two types of tests cannot be
used for comparison with the others, since the conditions of
their implementation differ: the size-and-shape test has two
base parameters instead of one, and the graph test differs from
testing methods A and B.
The cubic and radial coordinate systems were tested only
with method A, since categories cannot be expressed using
coordinates. In this test, an object was presented and its
coordinate had to be determined. One could argue that the
condition of test A is not met, since only one object is shown
rather than two. However, the condition is effectively satisfied
because the user compares the object with the origin of the
coordinate axes, i.e., two objects are compared, as in the other
tests of category A.
On average, testing took 50 minutes. Precisely because of
this duration, we decided to limit ourselves to this set of tests:
over such a long period the subjects become tired, which can
affect the test results.
Test results
A total of 56 measurements were obtained. As a result of
testing, the distributions of accuracy and time were obtained.
The figures show these distributions as box-and-whisker plots.
The accuracy was normalized so that the correct answer
corresponds to 0, and a value of 1 indicates a 100% deviation
from the correct answer. For example, if the correct answer is
5 (chart value 0), an answer of 10 corresponds to a chart value
of 1 and an answer of 2.5 to a chart value of 0.5.
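A short sketch of this normalization, assuming (consistently with the examples above) that the chart value is the absolute relative deviation from the correct answer:

```python
def normalized_deviation(answer: float, correct: float) -> float:
    """Normalized accuracy: 0 means a correct answer, 1 means a 100% deviation."""
    return abs(answer - correct) / correct

assert normalized_deviation(5, 5) == 0.0
assert normalized_deviation(10, 5) == 1.0
assert normalized_deviation(2.5, 5) == 0.5
```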
Figure 1 – Test A Accuracy
The accuracy of the test A components was divided into
three categories (Figure 1):
Accurate components - components whose accuracy upper
quartile Q3 is less than 1: movement, scaling, rotation, and the
cubic coordinate system.
Components with errors - components whose accuracy Q3
is between 1 and 2: radial coordinates, opacity, and size.
Inaccurate components - components whose accuracy Q3
is greater than 2: hue and saturation.
Figure 2 – Test B Accuracy
The accuracy of the test B components was divided into
three categories (Figure 2):
Accurate components - components whose accuracy upper
quartile Q3 is equal to 0: movement, scaling, rotation, color,
and shape.
Components with errors - components whose accuracy Q3
is less than 1: size, opacity, and texture.
Inaccurate components - components whose accuracy Q3
is greater than 1: hue and saturation.
The speed was calculated as the difference between the
end of the test (when the answer was entered) and its start
(when the objects appeared before the user). The graphs show
the time in seconds.
Figure 3 – Test A Speed
The speed of the test A components was divided into three
categories (Figure 3):
Fast components - components whose time upper quartile
Q3 is less than 30 seconds: size, saturation, opacity, scaling,
and movement.
Acceptable components - components whose time Q3 is
less than 60 seconds: rotation and hue.
Slow components - components whose time Q3 is more
than 60 seconds: cubic coordinates and radial coordinates.
Figure 4 – Test B Speed
The speed of the test B components was divided into three
categories (Figure 4):
Fast components - components whose time upper quartile
Q3 is less than 30 seconds: rotation, scaling, and movement.
Acceptable components - components whose time Q3 is
less than 60 seconds: size, shape, color, saturation, and texture.
Slow components - components whose time Q3 is more
than 60 seconds: opacity and hue.
The final table of tests is as follows.
Component | Accuracy (test A) | Accuracy (test B) | Time (test A) | Time (test B)
Size | with errors | with errors | fast | acceptable
Shape | - | accurate | - | acceptable
Color | - | accurate | - | acceptable
Hue | inaccurate | inaccurate | acceptable | slow
Saturation | inaccurate | inaccurate | fast | acceptable
Opacity | with errors | with errors | fast | slow
Texture | - | with errors | - | acceptable
Cubic coord. system | accurate | - | slow | -
Radial coord. system | with errors | - | slow | -
Rotation | accurate | accurate | acceptable | fast
Scaling | accurate | accurate | fast | fast
Movement | accurate | accurate | fast | fast
Table 1 – The effectiveness of the components
In addition to the main tests, a test was conducted on the
perception of size depending on the shape and on the
perception of large-scale structures (graphs with the number of
vertices from 30 to 700) (Figures 5, 6).
Figure 5 – Accuracy tests A, B, size and shape, large-scale
structure
Figure 6 – Speed tests A, B, size and shape, large-scale
structure
The results of the size perception test with different shapes
show that shape affects size perception mainly in terms of
analysis time.
The results of the large-scale structure perception test show
that perception, both in accuracy and in speed, is at
approximately the same level as in test A.
These two additional tests show that the components
themselves and the number of objects also influence each
other. They cannot be compared directly with the others, since
they violate the conditions regarding the number of displayed
metrics and objects. Nevertheless, the following conclusions
can be drawn from them.
In the size-and-shape test (an analog of test A), displaying
several metrics at once negatively affects perception.
When evaluating a large-scale structure (an analog of test
B, where the simplest type of perception, mere “presence”, is
tested), the perception of the metric significantly exceeds all
similar B tests in both speed and accuracy.
These two tests show that selecting effective components
is not enough to predict a certain level of perception in
advance, since visualization has emergent properties. Design
decisions should be based on component effectiveness in order
to maximize the chance of obtaining an effective visualization
model. The resulting component efficiency table (Table 1) can
be used to build visualization models, but it does not have
predictive capabilities.
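As an illustration of how Table 1 could guide such design decisions, the sketch below encodes its test A columns and filters components by the required accuracy class and an acceptable speed class. The encoding and the helper function are assumptions for illustration, not tooling from the paper.

```python
# Table 1 encoded for test A as (accuracy class, speed class); "-" entries omitted.
TABLE_1_TEST_A = {
    "size": ("with errors", "fast"),
    "hue": ("inaccurate", "acceptable"),
    "saturation": ("inaccurate", "fast"),
    "opacity": ("with errors", "fast"),
    "cubic coordinates": ("accurate", "slow"),
    "radial coordinates": ("with errors", "slow"),
    "rotation": ("accurate", "acceptable"),
    "scaling": ("accurate", "fast"),
    "movement": ("accurate", "fast"),
}

def candidates(required_accuracy: str, max_speed: str) -> list[str]:
    """Return components whose accuracy class matches and whose speed class
    is at least as good as required (fast < acceptable < slow)."""
    order = {"fast": 0, "acceptable": 1, "slow": 2}
    return [name for name, (acc, spd) in TABLE_1_TEST_A.items()
            if acc == required_accuracy and order[spd] <= order[max_speed]]

# Components suitable for exact metrics that must be read quickly:
print(candidates("accurate", "fast"))   # ['scaling', 'movement']
```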
An example of passing the test can be found here:
https://youtu.be/z_dx1WcxR4c
V. CONCLUSION
The article proposes possible VR/AR visualization models
for the investigated information security processes, as well as
their correspondence to the security tasks that they solve.
An experimental evaluation of effectiveness was conducted,
and its results can serve as the basis for a strategy for using
various visualization components.
This result should increase the performance indicators
(accuracy and speed) of the operator’s interaction with the data
obtained by information security systems.
The approach can be applied in information security
analytics to analyze large volumes of data that require both
simple and complex visualization models and that prioritize
accuracy and speed.
Further research will concern the search for optimal forms
of human-computer interaction with information security data
in virtual and augmented reality, as well as the development of
a methodology for interacting with information security metrics
in virtual and augmented reality.
ACKNOWLEDGMENT
REFERENCES
[1] Kwon O.-H., Muelder C., Lee K., Ma K.-L. Spherical layout and
rendering methods for immersive graph visualization, Visualization
Symposium (PacificVis), 2015 IEEE Pacific, 2015.
[2] Becker A.A., Designing Virtual Reality Data Visualizations, OpenViz
Conference, 2016.
Official website of MolDRIVE. URL:
http://graphics.tudelft.nl/~michal/vr_demos (Accessed: 26/10/19).
[4] Official website of Landrian Networks. URL:
http://www.landriannetworks.com (Accessed: 26/10/19).
[5] Official website of ForeScout. URL: https://forescout.com/ (Accessed:
26/10/19).
[6] Official website of Virtual Cove. URL: http://virtualcove.com
(Accessed: 26/10/19).
[7] Dam A., Laidlaw D. H., Simpson R. M., Experiments in Immersive
Virtual Reality for Scientific Visualization, Computers & Graphics,
N26, 2002.
Marcio Henrique da Silva, Andre Cotelli do Espírito Santo, Eugenio
Rangel Marins, Ana Paula Legey de Siqueira, Daniel Machado Mol,
Antonio Carlos de Abreu Mol, Using virtual reality to support the
physical security of nuclear facilities, Progress in Nuclear Energy, No. 78,
2015.
[9] Jensen C. S., Lu H., Yang B. Graph model based indoor tracking //2009
Tenth International Conference on Mobile Data Management: Systems,
Services and Middleware. – IEEE, 2009. – P. 122-131.
[10] omitted for blind review
[11] Jacobs J., Rudis B. Data-driven security: analysis, visualization and
dashboards. – John Wiley & Sons, 2014.
[12] omitted for blind review
Dinulescu A., Ursulean G. Cyberspace cartography //Romanian
Military Thinking. – 2015. – No. 2.
[14] Marty R. Applied security visualization. – Upper Saddle River :
Addison-Wesley, 2009. – P. 552.
[15] omitted for blind review
[16] Noel S., Jacobs M. Kalapa P., Jajodia S. Multiple coordinated views for
network attack graphs //IEEE Workshop on Visualization for Computer
Security, 2005.(VizSEC 05). – IEEE, 2005. – P. 99-106.
[17] omitted for blind review
[18] Zhao H., Tang W., Zou X., Wang Y., Zu Y. Analysis of Visualization
Systems for Cyber Security //Recent Developments in Intelligent
Computing, Communication and Devices. – Springer, Singapore, 2019.
– P. 1051-1061.
[19] omitted for blind review
[20] omitted for blind review
[21] Flake H. Graph-based binary analysis //Blackhat Briefings 2002. –
2002.
[22] Novikova E., Murenin I. Visualization-Driven Approach to Anomaly
Detection in the Movement of Critical Infrastructure //International
Conference on Mathematical Methods, Models, and Architectures for
Computer Network Security. – Springer, Cham, 2017. – P. 50-61.
[23] Whitaker R. B. Applying information visualization to computer security
applications. Master’s thesis, Utah State University, January 2010.
[24] Tricaud S., Nance K., Saade P. Visualizing network activity using
parallel coordinates. In Proc. of the 44th Hawaii International
Conference on System Sciences (HICSS’11), The Grand Hyatt Kauai
Resort and Spa Kauai, USA, pages 1–8. IEEE, January 2011.
[25] omitted for blind review