Assessing Attack Surface with
Component-Based Package Dependency
Su Zhang¹, Xinwen Zhang², Xinming Ou³,
Liqun Chen⁴, Nigel Edwards⁴, and Jing Jin⁵
¹ Symantec Corporation, California, USA
² Samsung Research America, Mountain View, USA
³ University of South Florida, Tampa, USA
⁴ Hewlett-Packard Laboratories, Bristol, UK
⁵ Intuit Inc., California, USA
Abstract. Package dependency has been considered in many vulnerability assessment systems. However, existing approaches are either coarse-grained and do not accurately reveal the influence and severity of vulnerabilities, or do not provide comprehensive (both incoming and outgoing) analysis of the attack surface through package dependency. We propose a systematic approach to measuring the attack surface exposed by individual vulnerabilities through component level dependency analysis. The metric could potentially be extended to calculate attack surfaces at the component, package, and system levels. It could also be used to calculate both incoming and outgoing attack surfaces, which enables system administrators to accurately evaluate how much risk a vulnerability, a component, or a package brings to the complete system, as well as the risk that is injected into a component or package by the packages it depends on in a given system. To the best of our knowledge, our approach is the first to quantitatively assess attack surfaces of vulnerabilities, components, packages, and systems through component level dependency.
1 Introduction
Attack surface usually refers to the exploitable resources exposed to attackers [18,19]. The attack surface brought by a vulnerability can be dramatically enlarged when more installed packages depend on the vulnerable application, because more resources can then be accessed by an attacker to exploit the vulnerability. Therefore the attack surface metric can serve as an effective indicator for vulnerability assessment, a critical task for security prioritization. Currently, the de facto standard vulnerability scoring system – the Common Vulnerability Scoring System (CVSS) [21] – quantifies the risk
© Springer International Publishing Switzerland 2015
M. Qiu et al. (Eds.): NSS 2015, LNCS 9408, pp. 405–417, 2015.
DOI: 10.1007/978-3-319-25645-0_29
for each known vulnerability. Specifically, CVSS measures exploitability metrics (access vector, access complexity, and authentication) and impact metrics (confidentiality, integrity, and availability loss) of a vulnerability, which are then used to calculate a base score ranging from 0 to 10 indicating the severity of the vulnerability.
Moreover, CVSS does not take package dependency into consideration,
which, based on our analysis in this paper, dramatically affects the exploitability of a vulnerability, especially when it appears in a prevalent package used by many other packages. Therefore current CVSS scores do not reveal the fact that vulnerabilities in highly depended-upon packages usually bring larger attack surfaces than those detected in a client application, even when they have the same CVSS scores. Because packages depended upon by a number of applications are usually more exposed than "ground" software (with no dependents), attackers have more incentive to intrude into a system through each of these dependents (or their dependents). Therefore, the attack surface brought by package dependency should not be ignored, and accurately measuring it is a non-trivial part of evaluating vulnerability severity.
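For reference, the CVSS v2 base score mentioned above combines an impact and an exploitability sub-score. The following sketch follows the standard v2 equations from [21]; the numeric arguments are the metric weights defined in the v2 specification:

```python
def cvss_v2_base(av, ac, au, c, i, a):
    """CVSS v2 base score from the six metric weights (already mapped to
    their numeric values per the v2 specification)."""
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(score, 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C -- remotely exploitable, complete impact
print(cvss_v2_base(av=1.0, ac=0.71, au=0.704, c=0.66, i=0.66, a=0.66))  # → 10.0
```

Note that the base score ignores where the vulnerable package sits in the dependency graph, which is exactly the gap the metric in this paper addresses.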
Researchers have proposed to measure risk with package dependency taken into consideration. Neuhaus et al. [23] study package dependencies in Red Hat systems, and infer "beauty" packages (low risk) and "beast" packages (high risk) based on the inter-package dependencies and historical vulnerability information for each package. Their output can be used by developers to choose dependable packages with few risks or historical vulnerabilities. But they only consider the number of historical vulnerabilities as the risk factor for each package, rather than measuring the attack surface brought by known vulnerabilities in a given system. Raemaekers et al. [31] study the risk brought to a package by third party libraries. They evaluate potential risks from third party applications by considering whether the referenced packages are well scrutinized, the number of referenced packages, and the number of classes using referenced libraries. However, they only measure incoming risk (risk brought by third party libraries) at the package level, and do not consider any finer-grained (component level) or coarser-grained (system level) incoming attack surface. Moreover, this work does not evaluate outgoing attack surfaces, which are brought by individual vulnerabilities, components, and packages to a system, and which are important inputs when system administrators prioritize security related plans such as patching and hardening, and when developers choose dependent packages.
With our approach, vulnerability and component level metrics can assist sys-
tem administrators in prioritizing patching or hardening plans towards the entire
system, while the overall package and system level metrics can help developers to
choose secure and reliable development images, platforms, and specific systems.
Our solution also helps other stakeholders to observe the evolution of package
dependency based attack surface for a given system.
2.1 A Real Motivating Example
To motivate our attack surface analysis with package dependency, we systematically analyze the risk trend of a set of VMware products through VMware Security Advisories (VMSA)¹. Each VMSA is an official notification regarding a set of known security vulnerabilities that affect VMware products, each of which corresponds to a Common Vulnerabilities and Exposures (CVE) record included in the U.S. National Vulnerability Database (NVD²). Each VMSA entry includes the origin of the vulnerabilities, vulnerability IDs, affected applications, and proposed solutions to the issue. Based on our analysis of VMSA entries from July 2007 to December 2012, we find that almost two thirds (56/90) of the VMSAs include vulnerabilities originating from third party applications that affect VMware products, as Table 1 shows. For instance, ESX – the previous-generation hypervisor – may be exploited through vulnerabilities described in 27 VMSAs that were detected in the Linux management console, which provides management functions for ESX such as executing scripts or installing third party agents for hardware monitoring, backup, and system management [1]. For another instance, the Java Runtime Environment (JRE) is required by a number of VMware products including ESX, Server, vMA, vCenter, and vCenter Update Manager; therefore a known vulnerability in JRE could make each of these products exploitable. Other major attack surface carriers include OpenSSL (9 out of 90), Kerberos 5 (8 out of 90), Apache Tomcat (6 out of 90), and libxml (6 out of 90). Note that one VMSA usually mentions multiple risks in different applications (see Table 1 for details).
Table 1. Risks from Third Party Packages to VMware Products

Third-party package name    # of VMSAs  Affected VMware products
Console Operating System    27          ESX
JRE                         11          ESX, Server, vMA, vCenter, vCenter Update Manager
OpenSSL                     9           ESX, ESXi, vCenter
Kerberos 5                  8           ESX, ESXi
Apache Tomcat               6           ESX, vCenter
libxml                      6           ESX
Our VMSA analysis motivates a security metric that takes package dependency into consideration, which can help system administrators and software developers identify vulnerabilities in highly depended-upon programs (e.g., JRE and the Linux console) that carry larger attack surfaces than others such as client side vulnerabilities (see Figure 1). Consequently, a system administrator may want
Fig. 1. Comparison of attack paths to a vulnerable client side application Q and a highly depended-upon library P.
to patch a JRE vulnerability affecting a number of products earlier than others, even if they have the same CVSS score. A system level metric can also help stakeholders choose system images with smaller attack surfaces and monitor how the dependency based attack surfaces evolve over time.
2.2 Why Component Level Dependency Analysis?
From the perspective of software engineering, a system can be decomposed into various packages. One package can usually be further divided into one or more components, each of which is made up of classes with related functions. From the above motivating example with VMSA, we have seen that attack surfaces from third party packages should not be ignored in risk analysis, and that we need to look into package dependencies to understand how attack surface is injected by external packages into a system. When measuring such dependency based attack surfaces, we analyze at the component level for the following reasons.
More accurate dependency information than package level: Component level dependency is finer-grained than package level dependency, and can therefore locate attack surfaces with higher accuracy. As Figure 2 shows, given two packages with the same dependency map at the package level, their attack surfaces could vary significantly if the known vulnerabilities in the two packages reside in components with different dependency maps. Also, components in the same package should be differentiated, as their effects on the attack surface can be significantly different.
Less complex dependency information than class level: We keep our dependency analysis at the component level rather than going further into the class or object level because it is usually difficult to distinguish the sources or causes of vulnerabilities at that level. Each component is a unit realizing a set of related functions. Classes within the same component are usually more integrated and interact more
Fig. 2. One package level dependency with two different component level dependencies.
compared to those in different components. Therefore, for each vulnerability, its exploitability highly depends on its accessibility at the component level. Previous studies also show that a vulnerability becomes significantly more exploitable when attackers know that its component is accessible [25,26]. Besides, it is usually difficult to construct a map between vulnerabilities and the classes in which they are detected. Furthermore, proprietary software vendors usually do not disclose their product information at the class level, whereas security bugs and alerts are usually maintained by databases such as Bugzilla at the component level³, which makes the vulnerability-component map retrievable [24]. Moreover, the complexity of a class level dependency map is exponentially higher than that of a component level dependency graph. We believe it is infeasible to achieve efficient analysis with a class level graph when dealing with a complex system including a large number of software packages.
3 Dependency-Based Attack Surface Analysis
This section explains the details of our dependency-based attack surface analysis.
Before that, we define the various attack surface metrics.
³ A vulnerability is usually identified as a security bug in Bugzilla.
3.1 Package Dependency at Component Level
In general, a package dependency refers to a code reuse by a component from
the library packages that it relies upon. Such code reuse could be at either
binary or source code level. For example, third party code could be called as
a compiled jar file or be imported as head files in source code. As shown in
Figure 2, each directed line represents one dependency relationship, where the
destination node represents the package or component that reuses some codes
from the source node package or component.
In our analysis, we do not differentiate dependency strength at component
level. Even though other metrics such as the number of references between the
two components can be obtained and used as the weight, the correlation between
these metrics and the strength of dependency is difficult to be determined and
judged without a comprehensive analysis over the source code of a target pack-
age. Therefore, we assign an equal weight 1 to each dependency between two
components in our analysis. But we still keep a weight variable in our algo-
rithms just for future customization of the dependency weight based on different
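For concreteness, the dependency map described above can be thought of as a weighted adjacency list with every edge weight fixed at 1; the component names below are purely hypothetical:

```python
# Edge c -> ck means component ck reuses code from (depends on) c,
# matching the arrow direction in Figure 2. Every edge carries weight 1
# per Section 3.1; the weight is kept explicit so it can be customized.
depend_on = {
    "openssl.crypto": {"tomcat.net": 1, "jre.security": 1},
    "jre.security": {"vcenter.auth": 1},
}

def dep_impact(c, ck):
    """depImpact(c, ck): dependency impact value; 0 if no edge exists."""
    return depend_on.get(c, {}).get(ck, 0)

print(dep_impact("openssl.crypto", "tomcat.net"))  # → 1
```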
3.2 Component-Based Attack Surface Analysis
Vulnerability Attack Surface. We define VAS as the system wide, package dependency based attack surface originating from a given vulnerability. VAS can be used to compare the exploitability of different vulnerabilities within the same system, and the comparison results can be used to prioritize patching or hardening tasks at the vulnerability level.
As Algorithm 1 shows, for each vulnerability, we first identify its component. Usually, the vulnerability-component map is provided by software vendors through security advisories, e.g., Oracle Security Advisories⁴. Starting from the component of the target vulnerability, we do a breadth first search until depth d, where d is the level of dependency. For example, if package p_a depends on p_b, which depends on p_c, then when evaluating p_a, both p_a and p_b are considered, but p_c is not if d is one; all of them are considered when d is larger than one. The depth can be customized based on user preferences. Each component (directly or indirectly) depending on the vulnerable component is considered part of the attack surface brought by the vulnerability. The impact factor of each component is the attack surface of the target vulnerability exposed through that component. We assign the CVSS score of the vulnerability as the impact factor of the component where it resides (the 'vulnerable component')⁵.
⁵ The calculation of impact factors of dependent components is illustrated in the following paragraph.
For components on multiple dependency chains from the vulnerable component, we only consider the closest dependency and ignore the rest. For example, suppose component c_a depends on c_b, which depends on c_c, and c_a also depends on c_c directly. Under
this circumstance, we ignore the chain c_a → c_b → c_c and only consider the direct dependency c_a → c_c.
We define a damping factor⁶ (ranging from 0 to 1) to represent the residual risk after each level of dependency, which is used to estimate the attack surface from/to nested depended-upon packages. The impact factor of a given component equals the product of three values: the dependency impact value from the component it depends on (the dependency impact value is returned by the function depImpact⁷ for a pair (c1, c2) when c1 depends on c2; we assign 1 to all impact values in our experiments because we treat all dependencies equally, as mentioned in Section 3.1), the damping factor, and the impact factor of the component it depends on. These impact factor values are eventually added up to one number, indicating the attack surface of the given vulnerability to the whole system.
In a nutshell, we process a weighted, component based dependency graph through breadth first search, calculating an impact factor for each component reachable from the vulnerable component in the dependency graph. We then add up all of these impact factors into one number, indicating the attack surface exposed by the target vulnerability.
4 Future Work
We propose an attack surface metric at the vulnerability level. The metric could also be aggregated to higher levels. A component level attack surface will let stakeholders know how much risk is brought by each individual component and plan hardening accordingly. A package level attack surface can be used to determine which package to depend upon among similar packages. A system level attack surface can indicate the health level of individual systems/images, which will help potential users decide which image to use. Experiments can also be conducted under different environments [5,16,28–30,41,42,46,53,54,56,57] along with other approaches [14,15,32,34–40,44,45,48,51,52]. Moreover, presentation tools like attack graphs [10,12,17,47,49,50,55] can be used to visualize risks from software dependencies.
5 Related Work
Risks from package dependency have been well researched [2,4,7,13,23,25,31,43,58]. Neuhaus et al. [23] evaluate risk per Red Hat package based on historical security vulnerabilities and package dependencies, but they do not evaluate the attack surface exposed by individual vulnerabilities. Besides, they only measure outgoing risk but not incoming risk for each package. Raemaekers et al. [31] explore the risk from third party applications. Instead of measuring
⁶ We assign 0.1 as the damping factor in our experiments.
⁷ We assign 1 to all DIV, as mentioned in Section 3.1.
⁸ The damping factor represents the residual risk after each level of dependency. Users can assign a value between 0 and 1 based on their own estimation.
Algorithm 1. Dependency-based Attack Surface Measurement for Individual Vulnerabilities: VAS(v0, d)
Input: Parameters: v0 – the target vulnerability; d – depth of assessment.
System configurations:
A map between the vulnerability v0 and its component component(v0).
A system wide component dependency map (dependents of component c are given by dependOn(c)).
Output: The package dependency based attack surface VAS brought by vulnerability v0.
c0 ← component(v0) {Retrieve the vulnerable component}
Queue Q ← (c0, 0) {Q is a queue of pairs (vulnerableComponent, depth)}
Table v0.t ← empty table
{v0.t is a table tracking processed components. The key is the affected component and the value is its impact factor from vulnerability v0.}
v0.t.put(c0, v0.cvss) {The impact factor of c0 equals the CVSS score of v0}
while Q is not empty do
  (c, n) ← Q.dequeue()
  if n ≥ d then
    continue {if the current component has already reached the pre-defined deepest level, there is no need to retrieve its dependents}
  end if
  for each ck in dependOn(c) do
    if v0.t.containsKey(ck) then
      continue {if the component has been previously processed, we skip it}
    end if
    Q.enqueue(ck, n+1) {Update Q in order to process dependents of ck if within our predefined depth}
    IFc ← v0.t.get(c) {Retrieve the impact factor of the current component c}
    DIV ← depImpact(c, ck)
    {depImpact(c, ck) returns the dependency impact value⁷ between c and ck.}
    IF ← DIV × DF × IFc {DF is the damping factor⁸. This is the calculation of the impact factor (IF) of component ck}
    VAS += IF {Cumulatively update the attack surface}
    v0.t.put(ck, IF) {Update the processed element table}
  end for
end while
return VAS {Sum up all impact factors of v0 into VAS}
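The algorithm above can be sketched as a short, executable Python function. This is a minimal reading of Algorithm 1, not the authors' implementation: per the return comment, the sketch sums all impact factors, including the vulnerable component's own CVSS score, and the example dependency map and component names are hypothetical.

```python
from collections import deque

def vas(cvss_score, vuln_component, depend_on, depth, damping=0.1, dep_impact=1.0):
    """VAS of one vulnerability via breadth-first search over the component
    dependency map. depend_on[c] lists the components that directly depend
    on component c. All dependency impact values are 1 and the damping
    factor defaults to 0.1, as in the paper's experiments."""
    impact = {vuln_component: cvss_score}  # v0.t: component -> impact factor
    queue = deque([(vuln_component, 0)])
    while queue:
        c, n = queue.popleft()
        if n >= depth:                      # stop expanding beyond depth d
            continue
        for ck in depend_on.get(c, ()):
            if ck in impact:                # closest dependency wins; skip rest
                continue
            queue.append((ck, n + 1))
            impact[ck] = dep_impact * damping * impact[c]
    return sum(impact.values())             # sum all impact factors into VAS

# Hypothetical map: ssl.core is used by two components, one of which
# has a further dependent, so the CVSS-10 vulnerability radiates outward.
deps = {"ssl.core": ["web.server", "mail.agent"], "web.server": ["admin.ui"]}
print(vas(10.0, "ssl.core", deps, depth=2))  # → 12.1
```

Each level of indirection contributes a factor of 0.1, so direct dependents add 1.0 each and the second-level dependent adds 0.1 on top of the base score of 10.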
attack surface from individual known vulnerabilities, they focus on whether a referenced package is well scrutinized and on the prevalence of usage per package. A set of works [2,13,43,58] studies the importance of component level dependency when assessing software quality, but no concrete security metric has been proposed. Chowdhury et al. [4] evaluate risk at the source code (class) level of dependency (e.g., complexity, coupling, and cohesion). However, their work is about inferring unknown vulnerabilities rather than evaluating the attack surface of known vulnerabilities.
A number of works study risks from Java applications [6,8,9,20,22,26,27]. Nasiri et al. [22] evaluate the attack surfaces of the J2EE and .NET platforms by quantitatively comparing their CVSS scores directly, but no package dependency is considered during the evaluation. Drake et al. [6] evaluate the JRE memory corruption attack surface from an engineering point of view, but do not provide a quantitative measurement of the attack surface. Gong et al. [9] review the evolution of Java's security mechanisms over the past ten years at a high level. Both Pérez et al. [27] and Goichon et al. [8] propose vulnerability detection approaches based on scanning Java source code. Marouf [20] classifies vulnerabilities specific to Java and proposes possible countermeasures against these threats. Similarly, Parrend et al. [26] classify Java vulnerabilities at the component level rather than the source code level.
Work on attack surface evaluation has been conducted by researchers [3,11,18,19,24,33]. Neuhaus et al. [24] rank vulnerable components in Firefox based on historically detected vulnerabilities. Similar to us, they evaluate risk at the component level; however, they consider these components as independent units rather than interdependent nodes.
The notion of attack surface has also been adopted in industry. Similar to [18], which evaluates the attack surface of Linux systems, the Microsoft attack surface⁹ work focuses on Windows by listing a number of threats based on the configuration of a given system. However, none of these takes package dependency into consideration when measuring the system attack surface.
6 Conclusions
We define the attack surface exposed through package dependency at the vulnerability level. Besides outgoing attack surfaces, we propose algorithms calculating incoming attack surfaces injected through package dependency into individual components and packages. Our approach provides a systematic methodology for system administrators to prioritize security tasks, and provides inputs for application developers choosing among system images with multiple dependency options.
References
1. VMware ESX and VMware ESXi - The Market Leading Production-Proven Hypervisors. VMware Inc. (2009). VMware-ESX-and-VMware-ESXi-DS-EN.pdf
2. Abate, P., Di Cosmo, R., Boender, J., Zacchiroli, S.: Strong dependencies between
software components. In: Proceedings of the 2009 3rd International Symposium
on Empirical Software Engineering and Measurement, pp. 89–99. IEEE Computer
Society (2009)
3. Cheng, P., Wang, L., Jajodia, S., Singhal, A.: Aggregating cvss base scores for
semantics-rich network security metrics. In: Proceedings of the 31st IEEE Interna-
tional Symposium on Reliable Distributed Systems (SRDS 2012). IEEE Computer
Society (2012)
4. Chowdhury, I., Zulkernine, M.: Can complexity, coupling, and cohesion metrics
be used as early indicators of vulnerabilities?. In: Proceedings of the 2010 ACM
Symposium on Applied Computing, pp. 1963–1969. ACM (2010)
5. DeLoach, S.A., Ou, X., Zhuang, R., Zhang, S.: Model-driven, moving-target defense
for enterprise network security. In: Bencomo, N., France, R., Cheng, B.H.C.,
Aßmann, U. (eds.) Models@run.time. LNCS, vol. 8378, pp. 137–161. Springer,
Heidelberg (2014)
6. Drake, J.J.: Exploiting memory corruption vulnerabilities in the java runtime
7. Ellison, R.J., Goodenough, J.B., Weinstock, C.B., Woody, C.: Evaluating and mit-
igating software supply chain security risks. Technical report, DTIC Document
8. Goichon, F., Salagnac, G., Parrend, P., Frénot, S.: Static vulnerability detection in java service-oriented components. Journal in Computer Virology, 1–12 (2012)
9. Gong, L.: Java security: a ten year retrospective. In: Annual Computer Security
Applications Conference, ACSAC 2009, pp. 395–405. IEEE (2009)
10. Homer, J., Zhang, S., Ou, X., Schmidt, D., Du, Y., Rajagopalan, S.R., Singhal,
A.: Aggregating vulnerability metrics in enterprise networks using attack graphs.
Journal of Computer Security 21(4), 561–597 (2013)
11. Howard, M., Pincus, J., Wing, J.: Measuring relative attack surfaces. In: Computer
Security in the 21st Century, pp. 109–137 (2005)
12. Huang, H., Zhang, S., Ou, X., Prakash, A., Sakallah, K.: Distilling critical attack
graph surface iteratively through minimum-cost sat solving. In: Proceedings of the
27th Annual Computer Security Applications Conference, pp. 31–40. ACM (2011)
13. Khan, M.A., Mahmood, S.: A graph based requirements clustering approach for
component selection. Advances in Engineering Software 54, 1–16 (2012)
14. Li, T., Zhou, X., Brandstatter, K., Raicu, I.: Distributed key-value store on hpc
and cloud systems. In: 2nd Greater Chicago Area System Research Workshop
(GCASR). Citeseer (2013)
15. Li, T., Zhou, X., Brandstatter, K., Zhao, D., Wang, K., Rajendran, A., Zhang,
Z., Raicu, I.: Zht: A light-weight reliable persistent dynamic scalable zero-hop
distributed hash table. In: 2013 IEEE 27th International Symposium on Parallel
& Distributed Processing (IPDPS), pp. 775–787. IEEE (2013)
16. Liu, X., Edwards, S., Riga, N., Medhi, D.: Design of a software-defined resilient vir-
tualized networking environment. In: 11th International Conference on the Design
of Reliable Communication Networks (DRCN), pp. 111–114. IEEE (2015)
17. Lv, Z., Su, T.: 3D seabed modeling and visualization on ubiquitous context. In:
SIGGRAPH Asia 2014 Posters, SA 2014, pp. 33:1–33:1. ACM, New York (2014)
18. Manadhata, P., Wing, J.M.: Measuring a system’s attack surface. Technical report,
DTIC Document (2004)
19. Manadhata, P.K., Wing, J.M.: An attack surface metric. IEEE Transactions on
Software Engineering 37(3), 371–386 (2011)
20. Marouf, S.M.: An Extensive Analysis of the Software Security Vulnerabilities that
exist within the Java Software Execution Environment. PhD thesis, University of
Wisconsin (2008)
21. Mell, P., Scarfone, K., Romanosky, S.: A complete guide to the common vulnerabil-
ity scoring system version 2.0. In: Published by FIRST-Forum of Incident Response
and Security Teams, pp. 1–23 (2007)
22. Nasiri, S., Azmi, R., Khalaj, R.: Adaptive and quantitative comparison of J2EE
vs. net based on attack surface metric. In: 2010 5th International Symposium on
Telecommunications (IST), pp. 199–205. IEEE (2010)
23. Neuhaus, S., Zimmermann, T.: The beauty and the beast: vulnerabilities in red
hat’s packages. In: Proceedings of the 2009 Conference on USENIX Annual Tech-
nical Conference, USENIX 2009, p. 30. USENIX Association, Berkeley (2009)
24. Neuhaus, S., Zimmermann, T., Holler, C., Zeller, A.: Predicting vulnerable soft-
ware components. In: Proceedings of the 14th ACM Conference on Computer and
Communications Security, pp. 529–540. ACM (2007)
25. Parrend, P.: Enhancing automated detection of vulnerabilities in java components.
In: International Conference on Availability, Reliability and Security, ARES 2009,
pp. 216–223. IEEE (2009)
26. Parrend, P., Frénot, S.: Classification of component vulnerabilities in java service oriented programming (SOP) platforms. In: Chaudron, M.R.V., Ren, X.-M., Reussner, R. (eds.) CBSE 2008. LNCS, vol. 5282, pp. 80–96. Springer, Heidelberg
27. Pérez, P.M., Filipiak, J., Sierra, J.M.: LAPSE+ static analysis security software: Vulnerabilities detection in java EE applications. In: Park, J.J., Yang, L.T., Lee, C. (eds.) FutureTech 2011, Part I. CCIS, vol. 184, pp. 148–156. Springer, Heidelberg
28. Qian, H., Andresen, D.: Jade: An efficient energy-aware computation offloading
system with heterogeneous network interface bonding for ad-hoc networked mobile
devices. In: 15th IEEE/ACIS International Conference on Software Engineering,
Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)
29. Qian, H., Andresen, D.: Emerald: Enhance scientific workflow performance with
computation offloading to the cloud. In: 2015 IEEE/ACIS 14th International Con-
ference on Computer and Information Science (ICIS), pp. 443–448. IEEE (2015)
30. Qian, H., Andresen, D.: An energy-saving task scheduler for mobile devices. In:
2015 IEEE/ACIS 14th International Conference on Computer and Information
Science (ICIS), pp. 423–430. IEEE (2015)
31. Raemaekers, S., van Deursen, A., Visser, J.: Exploring risks in the usage of third
party libraries. In: The Goal of the BElgian-NEtherlands Software eVOLution
Seminar, p. 31 (2011)
32. Su, Y., Wang, Y., Agrawal, G., Kettimuthu, R.: Sdquery dsi: integrating data
management support with a wide area data transfer protocol. In: Proceedings of the
International Conference on High Performance Computing, Networking, Storage
and Analysis, p. 47. ACM (2013)
33. Vijayakumar, H., Jakka, G., Rueda, S., Schiffman, J., Jaeger, T.: Integrity walls:
Finding attack surfaces from mandatory access control policies. In: Proceedings of
the 7th ACM Symposium on Information, Computer, and Communications Secu-
rity (ASIACCS 2012), May 2012
34. Wang, J.J.-Y., Sun, Y., Gao, X.: Sparse structure regularized ranking. Multimedia
Tools and Applications, 1–20 (2014)
35. Wang, K., Liu, N., Sadooghi, I., Yang, X., Zhou, X., Lang, M., Sun, X.-H., Raicu, I.:
Overcoming hadoop scaling limitations through distributed task execution
36. Wang, K., Zhou, X., Chen, H., Lang, M., Raicu, I.: Next generation job manage-
ment systems for extreme-scale ensemble computing. In: Proceedings of the 23rd
International Symposium on High-Performance Parallel and Distributed Comput-
ing, pp. 111–114. ACM (2014)
37. Wang, K., Zhou, X., Qiao, K., Lang, M., McClelland, B., Raicu, I.: Towards scalable distributed workload manager with monitoring-based weakly consistent resource stealing. In: Proceedings of the 24th International Symposium on High-Performance Parallel and Distributed Computing, pp. 219–222. ACM (2015)
38. Wang, K., Zhou, X., Li, T., Zhao, D., Lang, M., Raicu, I.: Optimizing load bal-
ancing and data-locality with data-aware scheduling. In: 2014 IEEE International
Conference on Big Data (Big Data), pp. 119–128. IEEE (2014)
39. Wang, Y., Nandi, A., Agrawal, G.: Saga: array storage as a DB with support for
structural aggregations. In: Proceedings of the 26th International Conference on
Scientific and Statistical Database Management, p. 9. ACM (2014)
40. Wang, Y., Su, Y., Agrawal, G.: Supporting a light-weight data management layer
over hdf5. In: 2013 13th IEEE/ACM International Symposium on Cluster, Cloud
and Grid Computing (CCGrid), pp. 335–342. IEEE (2013)
41. Wei, F., Roy, S., Ou, X., Robby.: Amandroid: A precise and general inter-
component data flow analysis framework for security vetting of android apps. In:
Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communi-
cations Security, pp. 1329–1341. ACM (2014)
42. Xiong, H., Zheng, Q., Zhang, X., Yao, D.: Cloudsafe: Securing data processing
within vulnerable virtualization environments in the cloud. In: 2013 IEEE Confer-
ence on Communications and Network Security (CNS), pp. 172–180. IEEE (2013)
43. Yamaguchi, F., Lindner, F., Rieck, K.: Vulnerability extrapolation: assisted discov-
ery of vulnerabilities using machine learning. In: Proceedings of the 5th USENIX
conference on Offensive Technologies, p. 13. USENIX Association (2011)
44. Zhang, H., Diao, Y., Immerman, N.: Recognizing patterns in streams with impre-
cise timestamps. Proceedings of the VLDB Endowment 3(1–2), 244–255 (2010)
45. Zhang, H., Diao, Y., Immerman, N.: On complexity and optimization of expensive
queries in complex event processing. In: Proceedings of the 2014 ACM SIGMOD
International Conference on Management of Data, pp. 217–228. ACM (2014)
46. Zhang, S.: Deep-diving into an easily-overlooked threat: Inter-vm
attacks. Whitepaper, provided by Kansas State University, TechRepub-
lic/US2012 (2013).
47. Zhang, S.: Quantitative risk assessment under multi-context environments. PhD
thesis, Kansas State University (2014)
48. Zhang, S., Caragea, D., Ou, X.: An empirical study on using the national vulnera-
bility database to predict software vulnerabilities. In: Hameurlain, A., Liddle, S.W.,
Schewe, K.-D., Zhou, X. (eds.) DEXA 2011, Part I. LNCS, vol. 6860, pp. 217–231.
Springer, Heidelberg (2011)
49. Zhang, S., Ou, X., Homer, J.: Effective network vulnerability assessment through
model abstraction. In: Holz, T., Bos, H. (eds.) DIMVA 2011. LNCS, vol. 6739,
pp. 17–34. Springer, Heidelberg (2011)
50. Zhang, S., Ou, X., Singhal, A., Homer, J.: An empirical study of a vulnerability
metric aggregation method. In: The 2011 International Conference on Security
and Management (SAM 2011), Special Track on Mission Assurance and Critical
Infrastructure Protection (STMACIP 2011) (2011)
51. Zhang, S., Zhang, X., Ou, X.: After we knew it: empirical study and modeling of
cost-effectiveness of exploiting prevalent known vulnerabilities across iaas cloud.
In: Proceedings of the 9th ACM Symposium on Information, Computer and Com-
munications Security, pp. 317–328. ACM (2014)
52. Zhao, D., Zhang, Z., Zhou, X., Li, T., Wang, K., Kimpe, D., Carns, P., Ross, R.,
Raicu, I.: Fusionfs: Toward supporting data-intensive scientific applications on
extreme-scale high-performance computing systems. In: 2014 IEEE International
Conference on Big Data (Big Data), pp. 61–70. IEEE (2014)
Assessing Attack Surface with Component-Based Package Dependency 417
53. Zheng, C., Zhu, S., Dai, S., Gu, G., Gong, X., Han, X., Zou, W.: Smartdroid:
An automatic system for revealing ui-based trigger conditions in android appli-
cations. In: Proceedings of the Second ACM Workshop on Security and Privacy
in Smartphones and Mobile Devices, SPSM 2012, pp. 93–104. ACM, New York
54. Zheng, Q., Zhu, W., Zhu, J., Zhang, X.: Improved anonymous proxy re-encryption
with cca security. In: Proceedings of the 9th ACM Symposium on Information
Computer and Communications Security, ASIA CCS 2014, pp. 249–258. ACM,
New York (2014)
55. Zhou, X., Sun, X., Sun, G., Yang, Y.: A combined static and dynamic software
birthmark based on component dependence graph. In: International Conference on
Intelligent Information Hiding and Multimedia Signal Processing, pp. 1416–1421.
IEEE (2008)
56. Zhuang, R., Zhang, S., Bardas, A., DeLoach, S.A., Ou, X., Singhal, A.: Inves-
tigating the application of moving target defenses to network security. In: 2013
6th International Symposium on Resilient Control Systems (ISRCS), pp. 162–169.
IEEE (2013)
57. Zhuang, R., Zhang, S., DeLoach, S.A., Ou, X., Singhal, A.: Simulation-based
approaches to studying effectiveness of moving-target network defense. In: National
Symposium on Moving Target Research (2012)
58. Zimmermann, T., Nagappan, N.: Predicting defects using network analysis on
dependency graphs. In: ACM/IEEE 30th International Conference on Software
Engineering, ICSE 2008, pp. 531–540. IEEE (2008)