TDABD: Test-Driven-Assurance-Based Development
Jonathan C. Rowanhill
Dependable Computing
Charlottesville, VA, USA
jonathan.rowanhill@dependablecomputing.com
Ashlie B. Hocking
Dependable Computing
Charlottesville, VA, USA
ben.hocking@dependablecomputing.com
William Hawkins
Dependable Computing
Charlottesville, VA, USA
will.hawkins@dependablecomputing.com
Abstract— To address the problem of assuring complex modern systems, we propose assurance-driven development in which the inferences of assurance arguments are themselves directly tested. We refer to this as test-driven-assurance-based development, or TDABD. TDABD focuses development on continuously testable argument reasoning, with incremental delivery of steadily improving assurance targets.
Keywords— assurance-driven development, test-driven
development, assurance argument, assurance case, assurance
confidence
I. INTRODUCTION
Assurance-based development (ABD) is applied while a system is being developed in order to intentionally direct that development towards a more robustly assured design and implementation [1]. A weakness of this approach, however, lies in the propensity of developers to rely on untested reasoning and inferences in the system’s assurance arguments.
This work proposes a technique of iterative development of
a system’s assurance argument that uses direct testing of the
argument’s inferences in order to reduce the argument’s reliance
on such untested reasoning. The technique, test-driven-
assurance-based development (TDABD), parallels test-driven
development (TDD). Just as TDD supports testing of claims
about a system, TDABD supports testing the inferences drawn
from claims.
The authors hypothesize that TDABD would improve ABD by grounding each iteration of ABD in more strongly founded argument. Furthermore, TDABD would promote “assurance releases” of a system’s assurance argument as self-contained releases wholly independent of functionality changes.
The remainder of this work introduces the TDABD model
and considers what would be required of argument and tooling
to support that approach.
II. THE DEVELOPMENT MODEL
The goal of TDABD is to produce living arguments that can
be developed, assessed, and improved iteratively as part of a
system’s development and maintenance lifecycle. In practice,
TDABD follows a spiral model [2] with
• Early and continuous testing of assurance logic for a
test-driven approach to argument development,
• Releases of improved assurance arguments with the same priority and role as feature and bug-fix releases of the functional system.
These two aspects of the approach are discussed in the
remainder of the section.
A. Instrumented Argument
TDABD is based on the model of the instrumented
argument. An instrumented argument is an explicit assurance
argument in a language such as SACM [3], GSN [4], or CAE [5]
that, regardless of what reasoning is applied, maximally asserts
each claim directly with evidence. Intuitively, an instrumented
argument ‘double checks’ claims using direct evidence, much as
might be done in a V-model. Each piece of evidence directly
associated with a claim could be thought of as a ‘unit test’ of that
claim independent of the claim’s supporting argument.
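As a minimal illustration (the names and structures below are hypothetical, not part of any published TDABD tooling), such an instrumented claim might be represented as a claim statement paired with evidence checks that each test it directly, independent of its supporting argument:

from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch: an instrumented claim pairs a claim statement with
# evidence checks that each test the claim directly, like unit tests of the
# claim that are independent of its supporting argument.

@dataclass
class EvidenceCheck:
    name: str
    test: Callable[[], bool]  # True if the evidence supports the claim

@dataclass
class InstrumentedClaim:
    statement: str
    evidence: List[EvidenceCheck] = field(default_factory=list)
    subclaims: List["InstrumentedClaim"] = field(default_factory=list)

    def directly_supported(self) -> bool:
        """'Double check' the claim using only its directly attached evidence."""
        return bool(self.evidence) and all(e.test() for e in self.evidence)

# Example: a latency claim tested directly against (made-up) measurements.
measured_latencies_ms = [42.0, 47.5, 49.9]
claim = InstrumentedClaim(
    statement="Control-loop latency is below 50 ms",
    evidence=[EvidenceCheck("bench_latency",
                            lambda: max(measured_latencies_ms) < 50.0)],
)
print(claim.directly_supported())  # True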
An instrumented assurance argument is a living model that
can always be computed against a system, no matter its current
state of development. This state of the system is collected under
temporal substantiation regimes that define when evidence is
available to test argument inferences, ranging from design-time
through operations-time evidence collection.
The approach of TDABD is to develop and maintain
instrumented assurance throughout a system’s lifecycle (e.g.,
design, implementation, deployment, and maintenance).
Multiple substantiation regimes can be applied to the same
assurance claims. For example, early simulation in the design
phase can be augmented with system log data during the
operational phase such that both test the same argument logic.
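As a sketch under assumed details (the claim predicate and evidence values are invented), the same claim check might be driven by a design-time simulation regime and an operations-time log regime, so that both exercise the same argument logic:

from typing import Iterable, List

# Hypothetical sketch: one claim predicate evaluated under two substantiation
# regimes that supply evidence at different points in the lifecycle.

def latency_claim_holds(samples_ms: Iterable[float]) -> bool:
    samples = list(samples_ms)
    return bool(samples) and max(samples) < 50.0

def design_time_regime() -> List[float]:
    # Design phase: latencies produced by an early simulation (made-up values).
    return [41.2, 44.8, 48.3]

def operations_time_regime() -> List[float]:
    # Operational phase: latencies parsed from system logs (made-up values).
    return [43.0, 49.1, 46.7]

# Both regimes exercise the same argument logic.
for regime in (design_time_regime, operations_time_regime):
    print(regime.__name__, latency_claim_holds(regime()))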
B. Test-Driven Sufficiency
TDABD continuously tests the system’s assurance argument
throughout its lifecycle using Bayesian analysis [6][7]. Given a
consequent property claim C and a set of antecedent subclaims SC, the TDABD practitioner shows experimentally that the consequent follows from its antecedents over a common state variation model of the claim targets.
The above approach applies to both deductive and inductive
inferences. For deductive reasoning, rules for testing inferences
are well-defined for the logical operators and can be applied
automatically. For inductive reasoning, a TDABD user might
model whether a given inference is supported or undermined by
various truth combinations of a consequent and its antecedents
and then apply a Baconian probability model [8].
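As a rough illustration (the claim, state model, and scoring below are invented and much simplified relative to [8]), an inference can be exercised over a state variation model and summarized in a Baconian style as doubts eliminated out of doubts raised:

from itertools import product

# Hypothetical sketch: test the inference "if all antecedents hold, the
# consequent holds" over a state variation model, then summarize the result
# in a Baconian style (doubts eliminated out of doubts raised). This is a
# simplified illustration, not the full scheme of [8].

def consequent(state):
    return state["speed"] <= state["limit"]

antecedents = [
    lambda s: s["limiter_enabled"],
    lambda s: s["limit"] >= 0,
]

# State variation model: enumerate combinations of the relevant state variables.
states = [
    {"limiter_enabled": enabled, "limit": limit, "speed": speed}
    for enabled, limit, speed in product([True, False], [30, 50], [20, 60])
]

doubts_raised = 0
doubts_eliminated = 0
for s in states:
    if all(a(s) for a in antecedents):  # the inference is exercised here
        doubts_raised += 1
        if consequent(s):               # the consequent follows as claimed
            doubts_eliminated += 1

print(f"inference support: {doubts_eliminated}|{doubts_raised}")  # prints 2|4

In this invented example the testing reveals that the stated antecedents do not by themselves warrant the consequent, so that argument step would need strengthening before it could be relied upon.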
The goal of TDABD is to continuously and maximally
instrument assurance arguments to actively test all such logical
steps. Furthermore, once deployed, this assurance argument
testing can continue to be performed to detect assurance
divergence due to changes in the system’s design or
unanticipated changes to the system and its operating
environments.
C. Development Process
TDABD takes place in two phases of system development.
1) Phase 1: Assurance Development
Phase 1 consists of system development in a spiral model
[2]. A developing system’s assurance arguments are initially
notional and amorphous. They consist of a hierarchy of claims
that are only weakly connected through abstract reasoning.
Each claim of the system’s notional assurance arguments is
instrumented with evidence for testing so that as developers
attempt to assemble the notional argument into more concrete
logical compositions, these proposed compositions can be
directly tested using the previously described technique.
Arguments iteratively mature through
• Argument Concretization: The hierarchy of properties
and their linking through logical relationships concretize
into inductive, and where possible [9], deductive
argument.
• Substantiation Breadth: The range of temporal
substantiation regimes for instrumented argument
increases in breadth with argument maturity.
• Confidence Improvement: The extent and surety with which the consequent claims of an argument follow from their antecedents under instrumented testing increase.
Phase 1 is complete when assurance stakeholders accept
measured concretization, confidence, and substantiation.
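A minimal sketch of how such an acceptance check might look (the measures and threshold values are placeholders, not values prescribed by TDABD):

from dataclasses import dataclass

# Hypothetical sketch: summarize the three maturity measures for an argument
# and check them against stakeholder-agreed acceptance thresholds.

@dataclass
class ArgumentMaturity:
    concretization: float        # fraction of inferences made concrete, 0..1
    substantiation_breadth: int  # number of substantiation regimes exercised
    confidence: float            # measured confidence in the argument, 0..1

def phase1_complete(m: ArgumentMaturity,
                    min_concretization: float = 0.9,
                    min_regimes: int = 2,
                    min_confidence: float = 0.95) -> bool:
    return (m.concretization >= min_concretization
            and m.substantiation_breadth >= min_regimes
            and m.confidence >= min_confidence)

print(phase1_complete(ArgumentMaturity(0.92, 3, 0.97)))  # True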
2) Phase 2: Assurance Releases
Phase 2 assurance occurs after initial system release.
TDABD introduces assurance releases specifically for the
purpose of incremental improvement of assurance over time.
Assurance releases are made in response to:
• Continued assessment of instrumented confidence
measurement: During use of a system, the instrumented
assurance argument will continue to produce
substantiation data that impacts published confidence.
• Improvement of assurance argument: Arguments can be improved as more expensive assurance methods (e.g., formal methods) reach fruition, as inductive reasoning is replaced with more extensive deduction, and as erroneous and fragile argument is detected and corrected.
This process continues as a system’s assurance arguments
mature according to the three measures introduced above.
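As an illustration of the first trigger (a hypothetical sketch; the measure and tolerance are placeholders), an assurance release might be flagged when confidence recomputed from fresh substantiation data drifts below the published figure:

# Hypothetical sketch: flag an assurance release when confidence recomputed
# from fresh substantiation data drifts below the published confidence by
# more than an agreed tolerance. Values are placeholders.

def needs_assurance_release(published_confidence: float,
                            measured_confidence: float,
                            tolerance: float = 0.05) -> bool:
    return measured_confidence < published_confidence - tolerance

# Operational substantiation data has lowered the measured confidence.
print(needs_assurance_release(published_confidence=0.97,
                              measured_confidence=0.88))  # True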
III. IMPACT ON ARGUMENT FORM
TDABD might impact the form of assurance arguments as
follows:
• TDABD favors logical specificity in claims and
deductive reasoning for analysis purposes.
• TDABD encourages development of claims prior to the assembly of inferences. Arguments might be weakly assembled in the early stages of a project.
• TDABD assumes arguments are living analysis products
that are instrumented to assess and improve validity and
strength throughout a system’s lifecycle.
IV. THE NEED FOR TOOLING
TDABD requires timely and precise assurance development, a need that could be met through the following emerging assurance technologies:
• System artifact-based argument generation, such as
generation of argument from system models [10][11].
• Comprehensive, semi-formal assurance languages, such
as SACM, that can represent deductive and inductive
inference across well-defined property semantics backed
by structurally organized evidence.
• Tools for “programming” semantically encoded
argument, such that engineers can quickly author and test
living arguments under TDABD.
• Live argument confidence assessment such that
argument strength can be rapidly analyzed and re-
assessed under each substantiation regime for a system.
The last two are current areas of research by the authors. The
goal is to develop tools allowing TDABD for robust assurance
of modern systems.
ACKNOWLEDGMENT
This work was funded by USAF AFRL/RQQA contract
FA8650-17-F-2220. Approved for Public Release: Distribution
Unlimited (Case Number: 88ABW-2020-2416).
REFERENCES
[1] P. Graydon, J.C. Knight, E.A. Strunk. "Assurance based development of
critical systems." 37th Annual IEEE/IFIP International Conference on
Dependable Systems and Networks (DSN'07). IEEE, 2007.
[2] B.W. Boehm. "A spiral model of software development and
enhancement." Computer 21.5 (1988): 61-72.
[3] Object Management Group (OMG). "Structured Assurance Case Metamodel (SACM) V2.1." 2020.
[4] T. Kelly, R. Weaver. "The goal structuring notation–a safety argument
notation." Proceedings of the dependable systems and networks 2004
workshop on assurance cases. Citeseer, 2004.
[5] L. Emmet, G. Cleland. "Graphical notations, narratives and persuasion: a
pliant systems approach to hypertext tool design." Proceedings of the
thirteenth ACM conference on Hypertext and hypermedia. 2002.
[6] W. Wu, T. Kelly. "Combining bayesian belief networks and the goal
structuring notation to support architectural reasoning about
safety." International Conference on Computer Safety, Reliability, and
Security. Springer, Berlin, Heidelberg, 2007.
[7] E. Denney, G. Pai, I. Habli. "Towards measurement of confidence in
safety cases." 2011 International Symposium on Empirical Software
Engineering and Measurement. IEEE, 2011.
[8] C. Weinstock, J. Goodenough, and A. Klein. "Measuring assurance case
confidence using Baconian probabilities." 2013 1st International
Workshop on Assurance Cases for Software-Intensive Systems
(ASSURE). IEEE, 2013.
[9] R. Bloomfield, J. Rushby. "Assurance 2.0: A Manifesto." arXiv preprint
arXiv:2004.10474 (2020).
[10] A. Gacek, J. Backes, D. Cofer, K. Slind, and M. Whalen. "Resolute: an assurance case language for architecture models." ACM SIGAda Ada Letters 34.3 (2014): 19-28.
[11] E. Denney, G. Pai, and J. Pohl. "AdvoCATE: An assurance case
automation toolset." International Conference on Computer Safety,
Reliability, and Security. Springer, Berlin, Heidelberg, 2012.