Guidelines for creating a formal verification testplan
Harry Foster
Mentor Graphics, Inc.
San Jose, CA
harry_foster@mentor.com
Lawrence Loh
Jasper Design Automation
Mountain View, CA
lawrence@jasper-da.com
Bahman Rabii
Google, Inc.
Mountain View, CA
bahman@unmaker.com
Vigyan Singhal
Oski Technology, Inc.
Fremont, CA
vigyan@oskitech.com
ABSTRACT
In this paper, we propose a systematic set of guidelines for
creating an effective formal verification testplan, which consists
of an English list of comprehensive requirements that capture
the desired functionality of the blocks we intend to formally
verify. We demonstrate our formal verification testplanning
techniques on a real example that involves an AMBA™ AHB
parallel to Inter IC (or I2C) serial bus bridge.
Keywords
Assertion, Formal verification, High-Level Requirement,
Specification, Verification testplan.
1. INTRODUCTION
Successful verification is not ad hoc in nature. On the contrary,
experience repeatedly demonstrates that success depends on
methodical verification planning combined with systematic
verification processes. The key to success is the verification
testplan.
With the emergence of assertion and property language
standards such as the IEEE Property Specification Language
(PSL) [4] and SystemVerilog Assertions (SVA) [5], design
teams are investigating formal verification and finding that it
should be a key component of their verification flow. Yet there
is a huge disconnect between attempting to prove an ad hoc set
of assertions and implementing an effective verification flow
that includes formal. The greatest return-on-investment (ROI)
for integrating formal into the flow is not achieved by proving
only an ad hoc set of assertions; it also involves
comprehensively proving blocks. For success, this approach
requires you to create a
comprehensive formal verification testplan. Most design teams,
however, lack expertise and guidelines on how to methodically
and systematically create an effective testplan. Furthermore, the
industry lacks literature on effective formal verification
testplanning techniques.
In this paper, we propose an integrated verification process that
includes formal verification as a key component. We begin by
introducing a systematic set of guidelines for creating an
effective formal verification testplan, which consists of an
English list of comprehensive requirements that capture the
desired functionality of the blocks you intend to formally verify.
One benefit the formal verification testplan approach provides is
a direct means to measure progress throughout the verification
process by tracking the English list of proved requirements.
Finally, we demonstrate formal verification testplanning
techniques on a real example that involves an AMBA AHB
parallel to Inter IC (I2C) serial bus bridge. We discuss
techniques such as hierarchical property partitioning
considerations and constraint specification in the context of this
real example. We chose this real example to illustrate the key
point that verification completeness for this bridge involves
more than proving a set of simple assertions (for example, the
bridge’s FIFO will not overflow). In addition, verification
completeness involves more than proving the bridge’s correct
interface behavior (for example, the bridge interface is AHB
compliant). Completeness requires a systematic process that
ensures all key features described in the architectural and micro-
architectural specification are identified and covered in the
verification testplan prior to writing any assertions.
2. TESTPLAN GUIDELINES
In this section, we discuss the strategies and techniques that will
help you create effective formal verification testplans.
2.1 Where to apply formal
Formal verification can often be a resource-intensive endeavor.
The first step in developing a formal testplan is to identify
which blocks will get a higher ROI from the use of formal
verification, and which blocks can be more reliably tested with
simulation (directed and random). The discussion in this section
will build your background and help you make those decisions.
Complexity of formal verification. Formal verification of
properties (that is, assertions or requirements) on RTL designs is
a known hard problem: the complexity of all known algorithms
for formal verification (a.k.a. model checking) is exponential in
the size of the designs [1, 2]. Thus, any naïve application of
formal verification is likely to cause state-space explosion and
impractical computer run-times. One coarse predictor of the
tractability of formal verification is the number of state-holding
elements (often flip-flops) in the cone of influence of the
property (see Figure 1). However, as we will see later in the
paper, for some classes of designs, this number can
sometimes be misleading because reduction techniques (based
on the requirements and the design) can dramatically reduce this
number.
It is imperative that the user prioritize the application of formal
verification by choosing design blocks that fall in the sweet spot
of formal verification and are amenable to all possible reduction
techniques, such as design reduction, abstraction, and
compositional reasoning (as discussed further in Section 2.2).
Figure 1. Cone of influence
Sequential vs. concurrent designs. A key determining factor for
choosing designs suitable for formal is whether a design or
block is mostly sequential (that is, non-concurrent) or mostly
concurrent.
Sequential blocks typically operate on a single stream of input data,
even though there might be multiple packets at various stages of
the design pipeline at any instant. An example of such
sequential behavior is an instruction decode unit that decodes a
processor instruction over many stages. Another example is an
MPEG encoder block that encodes a stream of data, possibly
over many pipeline stages. A floating point arithmetic unit is yet
another example. Often, you can describe the behavior of a
sequential hardware block in pseudo-code in a software
language, such as C or SystemC. In the absence of any
additional concurrent events that can interfere with the
sequential computation, you can adequately test blocks such as
these with simulation, often validating against a C reference
model. Formal verification, on the other hand, usually
encounters state-explosion for sequential designs because most
interesting end-to-end properties typically involve most flops of
these flop-intensive designs.
Concurrent designs deal with multiple streams of input data that
collide with each other. An example of such a block is a token
generator that is serving multiple requesting agents and
concurrently handling returns of tokens from other returning
agents. Another example is an arbiter, especially when it deals
with complex priority schemes. Both of the previous examples
have mostly control flops in the cone-of-influence. An example
of a concurrent design that is more datapath-intensive is a
switch core that negotiates traffic of packets going from
multiple ingress ports to multiple egress ports. While the cone-
of-influence of such a design can have a large number of flops,
especially if the datapath is very wide, a clever use of
decomposition can verify correctness of one datapath bit at a
time. This process of decomposition (covered more in Section
2.3) effectively reduces the mostly datapath problem to a mostly
control problem.
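As a sketch of this bit-level decomposition (the module, signal names, and two-cycle latency below are illustrative assumptions, not taken from any particular design), the transport check can be written over a single, symbolically chosen bit of the datapath; because the index is left unconstrained, one proof covers every bit without pulling the full datapath width into the cone of influence:

// Hypothetical per-bit data-transport check: a free bit index stands in
// for the whole datapath, so the proof is independent of its width.
module per_bit_transport_check #(parameter int WIDTH = 64) (
  input logic             clk, rst_n,
  input logic [WIDTH-1:0] din, dout,
  input logic             din_vld, dout_vld
);
  // Rigid free variable: the formal tool may choose any legal index,
  // so proving the property once covers all WIDTH bits.
  logic [$clog2(WIDTH)-1:0] idx;
  assume property (@(posedge clk) $stable(idx) && idx < WIDTH);

  // Illustrative fixed-latency transport requirement: bit idx of an
  // accepted input must reappear on the same output bit two cycles later.
  assert property (@(posedge clk) disable iff (!rst_n)
    din_vld |-> ##2 (dout_vld && dout[idx] == $past(din[idx], 2)));
endmodule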
Control vs. data transport vs. data transform blocks. Given the
discussion above, the following coarse characterization can
often help you determine whether formal is suitable. You can
usually characterize design blocks as control or datapath
oriented. You can further characterize datapath design blocks as
either data transport or data transform. Data transport blocks
essentially transport packets that are generally unchanged from
multiple input sources to multiple output destinations, for example, a
PCI Express Data Link Layer block. Data transform blocks
perform a mathematical computation (an algorithm) over
different inputs, for example, an IFFT convolution block (see
Figure 2).
What makes data transport blocks amenable to formal is the
independence of the bits in the datapath, often making the
formal verification independent of the width of the datapath.
Unfortunately, this kind of decomposition is usually not possible
in data transform blocks. The next section lists examples of
blocks that are more suited for formal than others.
Figure 2. Data verification flow (design blocks characterized as control vs. datapath, with datapath further split into data transport and data transform)
Blocks suitable for formal verification. As discussed, formal
verification is particularly effective for control logic and data
transport blocks containing high concurrency (illustrated in
Figure 3).
Figure 3. Concurrent paths
The following list includes examples of blocks ideally suited for
formal verification:
Arbiters of many different kinds
On-chip bus bridge
Power management unit
DMA controller
Host bus interface unit
Scheduler, implementing multiple virtual channels for
QoS
Clock disable unit (for mobile applications)
Interrupt controller
Memory controller
Token generator
Credit manager block
Standard interface (for example, PCI Express)
Proprietary interfaces
An example of a bug identified using formal verification on a
block involving concurrent paths is as follows:
During the first three cycles of a “transaction start” from
one side of the interface, a second “transaction start”
unexpectedly came in on the other side of the interface
and changed the configuration register. The processing of
the first transaction was confused by sampling of different
configuration values and resulted in a serious violation of
the PCI protocol and caused the bus to hang.
Concurrent blocks have many of these obscure, timing-based
scenarios for which formal is well suited.
Blocks not suitable for formal verification. In contrast, design
blocks that generally do not lend themselves to formal
verification tend to be sequential in nature (that is, a single-
stream of data) and potentially involve some type of data
transformation (see Figure 4).
Figure 4. Non-concurrent paths
Examples of designs that perform mathematical functions or
involve some type of data transformation include:
Floating point unit
Graphics shading unit
Convolution unit in a DSP chip
MPEG decoder
An example of a bug associated with this class of design
includes the following:
The IFFT block output result is incorrect if both inputs
are negative.
2.2 Formal testplan process
In discussing the process of defining a formal testplan, it is
helpful to briefly introduce some general concepts of block-
level formal verification. These introductions are necessarily
brief; for additional information refer to Perry and Foster [9], for
example.
What vs. how. There are two key differences between creating
formal and simulation testplans: the strict separation of checks
(observability) and input scenarios (stimulus), and the
preference for a more general specification style. Unlike
simulation, in which checkers and stimulus can be tightly
coupled, formal properties are defined in terms of generic
behavior, independent of particular input scenarios. Also unlike
simulation, formal properties are defined in terms of the
minimal correctness criteria. You should avoid cycle-accurate
behavioral models whenever possible.
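As an illustration of this "what, not how" style (the arbiter signals below are hypothetical and not part of the bridge example that follows), a formal requirement states only the minimal correctness criterion, for example that every request is eventually granted and that grants are mutually exclusive, rather than reproducing the implementation's exact cycle timing:

// Hypothetical arbiter checks written as minimal correctness criteria
// rather than a cycle-accurate reference model.
module arbiter_generic_checks (
  input logic       clk, rst_n,
  input logic [3:0] req, gnt
);
  // Liveness: a pending request is eventually granted; no fixed latency
  // is assumed (s_eventually is the strong "eventually" operator).
  for (genvar i = 0; i < 4; i++) begin : g_req
    assert property (@(posedge clk) disable iff (!rst_n)
      req[i] |-> s_eventually gnt[i]);
  end

  // Safety: at most one grant at any time.
  assert property (@(posedge clk) disable iff (!rst_n) $onehot0(gnt));
endmodule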
In certain cases, such as data integrity (see the example in
Section 3), the generic nature of formal checks might first
appear to require a great deal of scoreboarding state when
modeling the requirements. However, effective formal tools
should include formal friendly abstractions for these types of
properties, thus requiring small numbers of state elements.
Compositional reasoning. Compositional reasoning is the
process of reducing an analysis of a larger concurrent system to
reasoning about its individual functional pieces [8]. This
technique is effective for managing proof complexity and state
explosion during a formal proof. Compositional reasoning
transfers the burden of proof from the global component to the
local functional component level so that global properties can be
inferred from independently verified functional component
properties.
One of the main compositional reasoning techniques we
successfully use to prove complex designs is referred to as
assume-guarantee. This technique calls for you to prove
properties on a decomposed block using a set of assumptions
about another neighboring block, and then prove these
assumptions separately on the neighboring block, as illustrated
in Figure 5.
Figure 5. Assume-guarantee (the property assert always !(A & B); proved on Block X is reused as assume always !(A & B); when verifying Block Y)
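A minimal SVA sketch of the pairing in Figure 5 (block and signal names follow the figure; the bind targets are assumed instance names): the same mutual-exclusion property is proved as an assertion while verifying Block X and then reused as an assumption when Block Y is verified in isolation:

// Guarantee: proved while formally verifying Block X, which drives A and B.
module block_x_guarantee (input logic clk, rst_n, A, B);
  assert property (@(posedge clk) disable iff (!rst_n) !(A && B));
endmodule

// Assumption: reused while verifying Block Y stand-alone, so its inputs
// A and B only behave the way Block X was proved to drive them.
module block_y_assumption (input logic clk, rst_n, A, B);
  assume property (@(posedge clk) disable iff (!rst_n) !(A && B));
endmodule

// The checkers would typically be attached to the RTL with bind, e.g.:
//   bind block_x block_x_guarantee  u_x_chk (.*);
//   bind block_y block_y_assumption u_y_chk (.*);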
Another example of compositional reasoning is formal
abstraction, as illustrated in Figure 6. In this case, you prove
properties on a subsection of the formal analysis block. Then the
driving logic for this subsection is abstracted, that is, the design
logic is ignored in favor of the proved properties. This results in
a generalization of the design behavior that simplifies the formal
analysis. The key point of this technique is that if a property
holds on the formal abstractions (the generalization), then it
holds on the entire cone of influence (the actual design logic).
However, if a property fails on the formal abstraction, then it
might be necessary to include additional logic into a larger
analysis region, forming a new abstraction that eliminates the
false negative. A detailed discussion of formal abstractions is
beyond the scope of this paper (for additional details, see [8,9]).
Figure 6. Formal abstraction (an analysis region within the property's cone of influence; abstracted inputs are driven by free variables)
2.2.1 Formal testplan elements
The formal testplan for a design block consists of three
components. The first is a set of properties for verification
known as requirements or assertions¹. In addition, most designs
need legal inputs, which are expressed formally in terms of
formal properties defined on design inputs and known as
constraints or assumptions. Finally, certain test plans might use
formal coverage targets. Section 2.4 discusses the meaning of
coverage in formal verification.
2.2.1.1 Requirements
The first component of the formal testplan is a set of formal
requirements. Formal requirements express design behavior to
be proved. These are analogous with checkers in simulation
environments. End-to-end requirements are assertions
expressing the required core behavior of the design, usually
across multiple interfaces. Examples of end-to-end requirements
are that data are not dropped and that arbitration requirements
are satisfied. End-to-end properties should be expressed purely
in terms of block interface signals. Interface requirements
express the protocol rules expected by neighbor blocks and are
expressed purely over a single interface. In general, interface
requirements on a block are identical to the input assumptions
on the neighboring block on that interface. This is the
“guarantee” portion of the assume-guarantee relationship
between neighboring blocks.
We sometimes collectively refer to end-to-end requirements and
interface requirements as high-level requirements. A set of high-
level requirements can express the full specified behavior of the
block under verification. However, it might be useful to include
a number of assertions related to internal implementation-
specific features, that is, local assertions. These internal
properties provide substantial benefits in terms of defect
localization and might require relatively little effort to define
and verify. Local assertions have been the traditional application
for functional formal verification, and we do not discuss them in
detail in this paper [10,11].²
¹ While there is strictly no difference between these terms,
“assertions” is often used specifically to refer to highly
localized, implementation-specific properties. For this reason,
we favor the term “requirements” for more general use.
² While the completeness of the formal requirement set, as with
any set of checks, cannot be guaranteed analytically, a method
has been proposed for tools to provide quantifiable guidance
(see [5]).
2.2.1.2 Assumptions
The second component of the formal testplan for a design block
is a set of input assumptions. These are formal properties that
are generally defined using the same language and semantics as
formal requirements. This similarity is essential to the assume-
guarantee methodology. Assumptions are necessary to prevent
illegal input stimuli from causing spurious property violations.
Conversely, incorrect assumptions over-restrict the input stimuli
and hide real property violations. Conceptually, over-
constraining a proof is similar to running simulation checks with
poor functional coverage. In practice, however, the situation is
different in that it is difficult to measure the effects of over-
constrained inputs and nearly impossible to predict them.
Tracking and validating assumptions is possibly the most
important and subtle part of creating an effective formal
testplan. It is often easier to manage assumptions when you use
a hierarchical approach to testplan development.
You must explicitly state all formal assumptions. The best
option is to use assume-guarantee, that is, formally verify each
assumption as a requirement on a neighboring design block.
Though this option is ideal, in some cases it is not practical for
formal verification. As an alternative, you can sometimes
validate assumptions from well-specified interface rules, as is
the case for a standard interface. If neither of these approaches
is practical, you should use assumptions as assertions in higher-
level simulations. Most importantly, all assumptions must be
treated explicitly. It is a reasonable expectation for formal tools
to provide bookkeeping mechanisms to help track the validity of
assumptions. In addition, tools may provide methods for
visually sanity testing assumptions.
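One way to make that bookkeeping concrete (a sketch; the parameter-controlled switch and the req/ack rule are our own illustrations, not part of the testplan guidelines above) is to write each interface assumption exactly once and select its role per environment, so the identical property is an assumption in the block-level formal run and an assertion in higher-level simulation:

// One property, two roles: assumption for the block-level proof,
// assertion against the real neighbor in system-level simulation.
module req_ack_rules #(parameter bit FORMAL_MODE = 1) (
  input logic clk, rst_n, req, ack
);
  // Illustrative interface rule: once asserted, req holds until acknowledged.
  property p_req_held_until_ack;
    @(posedge clk) disable iff (!rst_n) (req && !ack) |=> req;
  endproperty

  if (FORMAL_MODE) begin : g_assume
    assume property (p_req_held_until_ack);  // constrains formal stimulus
  end else begin : g_assert
    assert property (p_req_held_until_ack);  // checks the neighbor in simulation
  end
endmodule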
Assumptions have applications other than constraining block
inputs. One example is mode setting through mode-related input
signals or configuration registers. You will not validate these
assumptions in the same sense as interface assumptions.
Yet another use for assumptions is to deliberately over-constrain
design behavior in preliminary verification.
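A small sketch of these two uses, as they might appear in a constraints file bound to the block (cfg_mode, burst_len, and the reserved encoding are hypothetical):

// Mode-setting assumptions: pin configuration inputs to legal, stable
// values for the proof (not validated the way interface assumptions are).
asm_legal_mode:  assume property (@(posedge clk) cfg_mode != 2'b11);
asm_stable_mode: assume property (@(posedge clk) rst_n |-> $stable(cfg_mode));

// Deliberate over-constraint for preliminary verification, removed later:
// explore only single-beat transactions at first.
asm_single_beat: assume property (@(posedge clk) burst_len == 4'd1);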
2.2.1.3 Coverage targets
The third component of the formal testplan relates to coverage,
specifically, formal coverage targets. Section 2.3 discusses
coverage concepts further. In particular, formal coverage
properties are a useful test for over-constraining input
assumptions.
2.2.2 Verification strategy
A complete set of formal properties is one part of a formal
testplan; a staged implementation plan is the other part. In
particular, when formally verifying a design under active
development, organize properties into functional categories and
develop a set of increasingly over-constrained assumptions that
represent different levels of functional completeness.
Verification begins with those properties representing the most
basic functionality of the block under the greatest restriction and
proceeds to full functionality with no over-constraint. Section
3.1.6 details an example of this approach.
A graduated strategy for proving requirements under different
levels of restrictions can also be valuable for tracking the
progress of formal verification. This is another area in which
formal tools can offer useful bookkeeping features.
2.2.3 Hierarchical testplanning
In reality, a formal testplan requires something more
complicated than a flat list of formal properties for a design
block. In general, the ideal block size for formal analysis is not
known during the planning stage. In addition, you might target
portions of a block or cluster for formal verification even when
the block as a whole is not optimal for formal verification. In
this case, selecting properties is best viewed from the level of
the larger block.
You will create formal testplans for large blocks hierarchically,
regardless of whether you intend to verify them with formal
alone or with a mix of formal and simulation. Initially, you will
define the upper-level testplan, which consists of requirements,
assumptions, and coverage targets, as if you intend to run the
formal analysis at the top level. Then define testplans for each
subblock in reference to the top-level testplan and map each top-
level requirement to one or more subblock requirements.
Finally, derive subblock assumptions from top-level
assumptions and assume-guarantee relationships between
subblocks.
Within this two-tiered testplan, you will target certain properties
for formal verification. If you formally prove the entire block,
no simulation is required at this level. In many cases this
approach will not be practical, particularly for design
organizations that are relatively new to formal verification. If
you use simulation for the higher-level block, a clearly
organized hierarchical formal verification strategy provides
valuable guidance about what simulation checkers to create and
what portions of the design you should target with input vectors
and monitor with functional coverage points.
2.3 Coverage
To conclude our testplan guideline discussion, we must address
the concept of coverage. In a traditional simulation verification
environment, there are two aspects of coverage you must assess
throughout the project to determine the quality of the
verification process: input space coverage and requirement
coverage. In this section, we describe how these aspects of
coverage relate to formal verification.
Input space coverage. Input space coverage is a measure of the
quality of the input vectors to activate (or exercise) portions of a
design. Typically, you can achieve high input space coverage
(which is evaluated by metrics such as line coverage or
functional coverage) by enumerating various scenarios and
creating directed simulation tests to exercise these scenarios.
Since it is impossible to enumerate all possible corner-case
scenarios for simulation, we generally apply constraint-driven
random input stimulus generation techniques to boost simulation coverage.
Formal verification, unlike simulation, does not depend on
enumerating corner-case scenarios and then generating input
stimulus. In fact, formal verification does not depend on any
input stimulus since we explore the entire input space of the
design using mathematical techniques without the need for input
vectors or simulation. This means that if a property is proven
true using formal verification, then there is no sequence of input
vectors you can simulate that would expose a corner-case bug.
Hence, you do not need traditional coverage techniques (such as
line coverage or functional coverage) since the quality of
exploring the input space in formal is complete and exhaustive.
The risk with formal verification is that a proof might have
completed with a set of formal constraints that restricts the input
space to a subset of possible behaviors. For formal verification,
the coverage you should perform ensures that the design is not
over-constrained while performing a proof. Therefore, the extent
of coverage is very different from what coverage-driven
simulation does. Coverage in a formal verification environment
ensures that we do not miss major operations. We demonstrate
this process on our example in Section 3.
Requirement coverage. The other key aspect of coverage you
must consider during verification is requirement coverage (often
referred to as property coverage in formal verification). In a
traditional simulation environment, you cannot automatically
apply any metrics to determine the completeness of the
testbench output checkers with respect to the requirements
defined in the specification (that is, line coverage and functional
coverage metrics do not measure the completeness of testbench
output checkers). Hence, when you create a simulation-based
testplan, it is critical for the design and verification team to
carefully review the requirements identified in the design
specification to ensure that an output checker is created to check
the set of requirements.
In formal verification, you must apply the same process to
ensure that the created property set covers all requirements
defined in the specification. During this process, there are two
questions about the final property set that you must answer:
1. Have we written enough properties (completeness)?
2. Are our properties connected (when partitioning complex
properties)?
For your design, it is critical for you to review your
specification (and your simulation testplan) to ensure that your
formal property set covers everything you intend.
Concerning the question, “Are our properties connected,” take
care when constructing your property set to take advantage of
the concept of assume-guarantee (as previously discussed). This
approach ensures that any properties used as assumptions on one
block will be proved on its neighboring block(s)—thus ensuring
the property set is connected and that you can trace a property
associated with the output of the memory controller all the way
through the design back to its inputs.
Achieving high requirement coverage. To ensure
comprehensiveness in developing your English requirements
checklist, we recommend the following steps:
1. Review the architectural and micro-architectural
specifications and create a checklist of requirements that
must be verified.
2. Review all block output ports in terms of functionality and
determine if you need to add items to your requirements
checklist.
3. Review all block input ports in terms of functionality and
determine if you need to add items to your requirements
checklist.
4. Review all data input ports and understand the life of the
data from the point it enters the block until it exits the
block (considering various end-to-end scenarios) and
determine if you need to add items to the requirements
checklist.
5. Conduct a final requirements checklist review with
appropriate stakeholders (for example, architects,
designers, verification engineers).
Measuring verification progress. The formal verification
testplan approach provides a direct means to measure progress
throughout the verification process. This benefit is easily
measured by tracking the English checklist of proved
requirements contained within the formal testplan.
3. APPLICATION EXAMPLE
In this example, we demonstrate the concepts introduced in
Section 2 on a real bridge example.
3.1 Overview of the AHB-Lite to I2C Bridge
"Bridge" is actually a rather broad term that refers to a design
where the transport of data (often between
occurs. In general, data is transferred in one of three for
Direct transfer of data, either as a single-cycle transfer or
as a burst
A fixed-size cell where there is a header, followed
payload, and finally, some frame-checking sequence
A packet, w
but different in terms of size
e are several key components in this bridge; howeve
all omponents apply to all bridges. The first key component
ists of the interfaces on the
second is the datapath flow through the bridge. The third is an
arbiter component (when applicable). Finally, bridges often
have some decoding and arithmetic computation, such as CRC
calculations and checking, ALU, and so forth.
Figure 6 shows an example of a bridge, which is a simple
AMBA AHB-Lite [2] to I2C [3] bridge. In our example, the
commands flow one direction from AHB to I2C, but the data
flows both ways. For the write direction, data are written into a
FIFO. When the FIFO is full, the AHB signal HREADYout is
deasserted until there is room in the FIFO again. Upon receiving
the write data, as long as there is room in the FIFO, the AHB
bus is free for other devices sharing the AHB to proceed to their
transactions. A read-cycle, however, will hold up the bus until
the data is ready (because a SPLIT transaction is not supported
by the bridge). Therefore, the read transaction has priority over
the write transaction except when there is a coherency issue. For
example, if a read address matches the write address of one of
the entries in the FIFO, the read transaction must wait until that
location is sent before proceeding to the I2C bus. Also, the read
transaction does not interrupt an I2C write transaction that has
already started.
Figure 6. AMBA AHB-Lite to I2C Bridge (AHB-Lite interface on one side of the bridge, I2C interface on the other)
3.1.1 Challenges in this class of designs
Although the gate count for this example bridge is not
particularly high, it represents two main formal verification
challenges. First, as with many datapaths involving queues,
there are storage elements that can cause a large state-space.
Second, data-transport paths with queues, and especially
involving a serial bus, have a very high sequential depth.
Consequently, it is going to take a large number of cycles to
complete the proof. Note that although creating a simulation
testbench for this example is fairly trivial, simulation suffers the
same challenges of dealing with the high sequential depth (that
is, a very high number of simulation cycles is required to
achieve reasonable coverage).
3.1.2 Example formal testplan process
As we previously stated, it is important to create a formal
testplan prior to attempting to comprehensively prove a block.
For our AMBA AHB-Lite to I2C bridge example, we followed a
systematic set of steps to create our formal testplan. In this
section, we generalize these steps into what we refer to as the
seven steps of formal testplanning, which apply to a broad class
of today’s designs.
1. Identify good formal candidates. First, determine if the
block you are considering is a good candidate for formal.
(Use the procedure previously described in Section 2.1.)
2. Create an overview description. Briefly describe the key
characteristics of the bridge (as we did in Section 3.1). The
introduction does not have to be in great detail but should
highlight the major functions of the bridge.
3. Define interface. Create a table that describes the details
for the block’s interface (internal) signals that must be
referenced (monitored) when creating the set of formal
properties. You will use this list to determine completeness
of the requirement checklist during the review process.
4. Create the requirements checklist. List, in a natural
language, all high-level requirements for this block (Use
the guidelines previously described in Section 2.3,
Achieving high requirement coverage). For our example,
this list can be as high-level as separating the requirements
into the following functionality: AMBA AHB interface,
I2C interface, end-to-end requirements, and miscellaneous
requirements, or as detailed as identifying each of the
AHB-Lite requirements, I2C requirements, and so forth.
5. Convert checklist requirements into formal properties. In
this step, convert each of the natural language high-level
requirements into a set of formal properties, using PSL,
SVA, or OVL, and whatever additional modeling is
required to enable you to describe the intended behavior.
6. Define verification strategy. This section of the formal
testplan is important for listing the strategy used to verify
the block. For example, it is important to verify interface
requirements before end-to-end requirements. In addition,
it might be beneficial to first verify some requirements with
restrictions before running with all possible inputs. For
example, you might decide to set HWRITE to 1 first. Then
proceed to checking the read path by setting HWRITE to 0.
Finally, remove the restrictions to allow both read and
write.
7. Define coverage goals. This section is important, especially
after obtaining a proof. List the coverage points such that if
those points are covered, you will be sure the true proof is
not a false positive due to over-constraining. Some of the
examples of coverage points for this design include FIFO
full, completion of read and write on different HSIZE and
HBURST, and a read with some occupied FIFO locations.
3.1.3 Interface description
The following table lists the signals defined in the AMBA
AHB-Lite to I2C bridge specification that we chose to monitor
as part of our high-level requirements model.
Signal Name     Description                 Size    Direction
HCLK            AHB Clock                   1-bit   In
HRESETn         Master Reset (active low)   1-bit   In
HADDR           AHB Address                 7-bit   In
HBURST          AHB Burst length            3-bit   In
HTRANS          AHB Transaction Type        2-bit   In
HSIZE           AHB Transfer Size           3-bit   In
HWRITE          AHB Write                   1-bit   In
HSEL            AHB Select                  1-bit   In
HREADYin        AHB HReady                  1-bit   In
HWDATA          AHB Write Data              32-bit  In
HRDATA          AHB Read Data               32-bit  Out
HRESP           AHB Response                2-bit   Out
HREADYout       AHB HREADYOUT               1-bit   Out
SDA             I2C Data                    1-bit   In/Out
SCL             I2C Clock                   1-bit   Out
i2c_clk_ratio   HCLK to I2C Clock ratio     2-bit   In
We find that creating this interface table is a useful part of our
formal testplanning process because it provides a clear focus of
what needs to be checked from a black-box perspective. Thus it
is useful for identifying missing requirements during a formal
testplan review (see Section 2.3, Achieving high requirement
coverage).
3.1.4 Requirements checklist
For our example, there are three main sections of high-level
requirements, the two interfaces and the end-to-end
requirements. (Listing the full set of requirements is beyond the
scope of this paper.) Our point is to demonstrate the process of
creating a comprehensive natural language list of requirements
derived from the architectural or micro-architectural
specification.
AMBA AHB-Lite interface requirements. In general, we can
partition AMBA AHB-Lite requirements into two categories:
master requirements and slave requirements. For our example,
we will focus on the subset of slave requirements.
1. Slave must assert HREADYOUT after reset
2. Slave must provide zero wait-state HREADYOUT=1
response to IDLE transaction
3. Slave must provide zero wait-state HREADYOUT=1
response to BUSY transaction
4. When not selected, Slave must assert HREADYOUT
5. Slave must drive HREADY low on first cycle of two-cycle
ERROR/SPLIT/RETRY response
6. . . .
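A sketch of how the first two slave requirements might look in SVA, as they could appear inside the bridge's requirements model (the HTRANS IDLE encoding, 2'b00, and the OKAY response, 2'b00, follow the AHB specification; the exact reset phrasing is our own illustration):

// Requirement 1: slave must assert HREADYOUT after reset.
a_ready_after_reset: assert property (@(posedge HCLK)
  !HRESETn ##1 HRESETn |-> HREADYout);

// Requirement 2: zero wait-state HREADYOUT=1 (OKAY) response to an
// IDLE transfer addressed to this slave.
a_idle_zero_wait: assert property (@(posedge HCLK) disable iff (!HRESETn)
  (HSEL && HREADYin && HTRANS == 2'b00) |=> (HREADYout && HRESP == 2'b00));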
I2C interface requirements. Because of space limitations, we
will not list the comprehensive set of I2C interface requirements.
However, we list a few I2C requirements below to demonstrate
the process of creating a natural language list of requirements:
1. SDA should remain stable when SCL is high
2. There should not be another start after a start until an end
occurs in the I2C bus.
3. The data between a start and an end should be divisible by
9 (8 bit/transfer + 1-bit ack)
4. . . .
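Requirement 2 is formalized in Section 3.1.5; as a sketch of requirement 1, written against the same i2c_start/i2c_end modeling signals introduced there (the start and stop conditions are the deliberate exceptions to SDA stability, so they are excluded):

// I2C requirement 1: while SCL stays high and no start/stop condition
// is being signaled, SDA must not change. Sampled on HCLK, as in Figure 8.
a_sda_stable: assert property (@(posedge HCLK) disable iff (!HRESETn)
  (SCL && $stable(SCL) && !i2c_start && !i2c_end) |-> $stable(SDA));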
End-to-end requirements. There are two classes of end-to-end
requirements associated with our bridge example. One class
includes data integrity requirements. The second class includes
consistency requirements, which use data as the golden
reference between the formal property and the RTL design to
verify that all controls are consistent with the referenced data.
For data integrity verification, there are also two separate paths
that must be considered, one for read and the other for write.
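A formal-friendly sketch of the write-path data-integrity requirement (ahb_write_accepted, fifo_empty, i2c_tx_vld, and i2c_tx_data are hypothetical abstractions of the bridge internals, and only the low byte is tracked to keep the sketch small): instead of a full scoreboard, one symbolically chosen write is watched through the bridge.

// Watched-datum data integrity: the undriven input watch_now acts as a
// free variable, so the formal tool may pick any write accepted into an
// empty FIFO; since the FIFO preserves order, that write must be the
// next byte the I2C side transmits.
module bridge_write_integrity (
  input logic        HCLK, HRESETn,
  input logic [31:0] HWDATA,
  input logic        ahb_write_accepted, fifo_empty,  // hypothetical abstractions
  input logic        i2c_tx_vld,
  input logic [7:0]  i2c_tx_data,
  input logic        watch_now                        // left undriven: free variable
);
  logic       watched;
  logic [7:0] watched_byte;

  always_ff @(posedge HCLK or negedge HRESETn) begin
    if (!HRESETn) begin
      watched      <= 1'b0;
      watched_byte <= '0;
    end else if (!watched && watch_now && ahb_write_accepted && fifo_empty) begin
      watched      <= 1'b1;             // start tracking this write
      watched_byte <= HWDATA[7:0];
    end else if (watched && i2c_tx_vld) begin
      watched      <= 1'b0;             // checked below; stop tracking
    end
  end

  a_write_data_integrity: assert property (@(posedge HCLK) disable iff (!HRESETn)
    (watched && i2c_tx_vld) |-> (i2c_tx_data == watched_byte));
endmodule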
Miscellaneous requirements. Miscellaneous requirements are
the checks for read/write dependency and are not included in
this paper due to space limitations.
3.1.5 Formal properties
Using the interface signals identified in Section 3.1.3, and the
set of natural language requirements identified in 3.1.4, create
your set of formal properties. We recommend that you
encapsulate your set of formalized requirements into a high-level
requirements model or verification unit that will monitor the
block’s interface signals.
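A sketch of such a verification unit for this bridge (the module and instance names are ours; the port list follows the interface table in Section 3.1.3), attached with bind so that it passively monitors the block's interface signals:

// Skeleton of the high-level requirements model for the bridge.
module ahb_i2c_bridge_props (
  input logic        HCLK, HRESETn,
  input logic [6:0]  HADDR,
  input logic [2:0]  HBURST, HSIZE,
  input logic [1:0]  HTRANS, HRESP,
  input logic        HWRITE, HSEL, HREADYin, HREADYout,
  input logic [31:0] HWDATA, HRDATA,
  input logic        SDA, SCL,
  input logic [1:0]  i2c_clk_ratio
);
  // AHB-Lite interface requirements, I2C interface requirements,
  // end-to-end requirements, and coverage targets are collected here.
endmodule

// Attach the requirements model to the bridge RTL (module name assumed):
//   bind ahb_i2c_bridge ahb_i2c_bridge_props u_props (.*);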
To demonstrate the formal specification process, we convert the
following I2C requirement into both PSL and SVA:
There should not be another start after a start until an
end occurs in the I2C bus.
Figure 7 illustrates the PSL coding for our natural language
requirement. In this example,
i2c_start and i2c_end
represent modeling code associated with the assertion,
composed of SCL and SDA.
default clock = HCLK;
A_no_start: assert (always i2c_start ->
  next(~i2c_start until i2c_end))
  abort (~RESETn);
Figure 7. PSL I2C assertion
Figure 8 illustrates the SVA coding for our natural language
requirements.
property P_no_start;
  @(posedge HCLK) disable iff (~HRESETn)
  i2c_start |=> ~i2c_start[*0:$] ##1 i2c_end;
endproperty
A_no_start: assert property (P_no_start);
Figure 8. SVA I2C assertion
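The i2c_start and i2c_end signals above are modeling code, not design signals. A minimal sketch of how they might be derived from SCL and SDA (sampled on HCLK, and assuming HCLK is fast enough to observe the bus transitions) follows; per the I2C specification, a START is SDA falling while SCL is high, and a STOP is SDA rising while SCL is high:

// Auxiliary modeling code for the assertions in Figures 7 and 8.
logic sda_q, scl_q;
logic i2c_start, i2c_end;

always_ff @(posedge HCLK or negedge HRESETn) begin
  if (!HRESETn) begin
    sda_q <= 1'b1;                     // I2C lines idle high
    scl_q <= 1'b1;
  end else begin
    sda_q <= SDA;
    scl_q <= SCL;
  end
end

assign i2c_start = scl_q && SCL && sda_q && !SDA;  // SDA 1->0 while SCL high
assign i2c_end   = scl_q && SCL && !sda_q && SDA;  // SDA 0->1 while SCL high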
The process of converting the natural language list of
requirements into a formal description is generally
straightforward. Hence, we have only illustrated one example of
this translation process. At times, in addition to using the
temporal constructs of today’s assertion languages, you will
need additional modeling (possibly as auxiliary state-machines
to model conceptual states of the environment or for capturing
data in a scoreboard fashion).
3.1.6 Verification strategy
For our example of a formal testplan, the verification strategy
section contains two main areas. First is to plan proper
partitioning to ensure that we can overcome any verification
bottleneck. Second is to provide a set of restriction definitions
and the recommended verification steps to systematically loosen
these restrictions over the course of the proof. The combination
of restrictions and steps forms the methodology used to
complete the formal proof on the bridge example.
Functional partitioning. It is important to recognize potential
bottlenecks in the verification process. Many times you can
manage those problems by applying compositional reasoning
approaches previously described in Section 2.2. For example,
the write data from the input goes through an internal interface
to the bridge before being sent out through the I2C interface.
There is a potential partition point around the internal bridge.
While it might not be necessary to partition the datapath, it is
still important to keep this in mind in case performance becomes
an issue.
For the reverse (read) path, the read request goes directly
to the I2C interface—except when there is a conflict with a pending
write. Therefore, there must be a conflict detection function to
compare the read address against all the write addresses in the
FIFO. To include a detection function as part of a datapath
requirement not only makes the property overly complex to
code, it also adds complexity into the datapath verification. This
is because the conflict-checking is a form of decoding logic that
is rather complex for formal verification. However, it is rather
straight-forward to validate the conflict detection functionality
as a standalone requirement, independent of the property.
Therefore, it is probably a good idea to separate the datapath
verification and the conflict detection verification (by black-
boxing the conflict-checking logic). The requirement for the
datapath will then use the output of the black-boxed conflict-
checking logic as an input during the analysis (assuming all
possible combinations of error detection during the proof).
Finally, we verify the conflict-checking logic itself, independent
of the datapath property. This way, we partition a difficult
problem (data-transport and complex decoding) into two
relatively simpler problems.
Restriction definition. Formal verification allows you to
uncover corner-cases within the design relative to all valid
sequences of input values. However, there are occasions when
you might want to verify a particular implemented functionality
on a partially completed design by restricting the input
sequences to a specified mode of operation (for example,
explore correct behavior for only read transactions versus read
and write transactions). Even under situations where the RTL is
complete, but the code has not gone through any verification, it
is often more efficient to start the verification process by
independently verifying the main functionality with restrictions
(that is, a special assumption that restricts the input space to
a subset of possible behaviors).
For our example, we divide the requirements into three sets:
Set 1: AMBA AHB-Lite and I2C interface requirements.
Set 2: End-to-end datapath, read and write.
Set 3: Misc.
And we define the restriction sets as follows:
1. Only unidirectional access (read or write), single cycle
access, no flow-control or errors
2. Only unidirectional access, all burst length, no flow-control
or errors
3. Bi-directional access, all burst length, no flow-control
or errors.
Verification steps. The following lists the recommended steps
for proving the AMBA AHB-Lite to I2C bridge example:
Prove Requirements Set 1 with Restriction Definition 1,
Restriction Definition 2, and Restriction Definition 3.
Prove Requirements Set 2 with Restriction Definition 1,
Restriction Definition 2, and Restriction Definition 3.
Prove Requirements Set 1 with no restrictions.
Prove Requirements Set 2 with no restrictions.
Prove Requirements Set 3 with no restrictions.
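As a sketch of how Restriction Definition 1 might be imposed (the burst encoding follows the AHB specification; the treatment of "no flow control or errors" shown here is illustrative and incomplete), the restrictions are simply additional assumptions that are dropped in the later, unrestricted steps:

// Restriction Definition 1 as temporary assumptions: write-only,
// single-cycle (SINGLE burst) accesses, no wait states from other slaves.
r1_write_only: assume property (@(posedge HCLK) disable iff (!HRESETn)
  HWRITE);                              // unidirectional access: writes only
r1_single_burst: assume property (@(posedge HCLK) disable iff (!HRESETn)
  HBURST == 3'b000);                    // SINGLE bursts only
r1_no_backpressure: assume property (@(posedge HCLK) disable iff (!HRESETn)
  HREADYin);                            // no flow control from other slaves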
If the design is mature, such as a legacy code with minor
changes, or a design that has gone through some simulation, you
might decide to skip through the Restriction Sets 2 and 3. It is
still important to go through Restriction Set 1 simply to set up
the proper environment and constraints, but it is not necessary to
go through the other restriction sets.
3.1.7 Coverage
As mentioned previously, coverage for formal verification
serves a different purpose than that in simulation. The coverage
points should focus on ensuring no over-constraining at the
inputs. Therefore there are three sets of coverage points:
Set 1: Input coverage – Read/write access with different burst
types, sizes, and lengths, and with HREADYOUT asserted and
deasserted.
Set 2: Output coverage – Read/write with acknowledgment,
no acknowledgement.
Set 3: Internal main state-machines – I2C state-machines,
AHB state-machines where they can enter and exit each state.
Other than coverage that determines whether each coverage
point is covered such that there is no over-constraint, it is also
important to ensure that the requirements are complete. We go
through the steps in Section 2.3 to ensure that there is no
obvious hole in the coverage provided by all the requirements.
After all the requirements are proven, we also ensure that all the
RTL is included in at least one requirement. Otherwise, the
codes that are not included are dead-code or additional
requirements are needed.
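A sketch of one Set 1 coverage point in SVA (the HTRANS NONSEQ encoding, 2'b10, follows the AHB specification), written as a cover property so that an unreachable target exposes over-constrained input assumptions:

// Input coverage: a write transfer attempted while the bridge applies
// back-pressure (HREADYout deasserted, i.e. the FIFO is full). If this
// point is unreachable, the environment assumptions are over-constraining.
c_write_while_stalled: cover property (@(posedge HCLK) disable iff (!HRESETn)
  HSEL && HWRITE && HTRANS == 2'b10 && !HREADYout);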
4. CONCLUSION
In this paper, we proposed a formal-based testplanning process,
which includes a systematic set of seven steps. By applying our
process to a real AMBA AHB parallel to Inter IC (I2C) serial
bus bridge example, we demonstrated that it is relevant to
today’s ASIC and SoC designs.
5. REFERENCES
[1] A. Aziz, V. Singhal, R. Brayton. “Verifying interacting finite state
machines: complexity issues.” Technical Report UCB/ERL M93/52.
Electronics Research Lab, Univ. California, Berkeley, CA 94720.
[2] AHB - AMBA Specification (rev 2.0) by ARM, copyright ARM
1999.
[3] I2C - The I2C-Bus Specification, Version 2.1, January 2000, by
Philips Semiconductors.
[4] IEEE Standard for Property Specification Language (PSL), IEEE
Std. 1850-2005.
[5] IEEE Standard for SystemVerilog: Unified Hardware Design,
Specification and Verification Language, IEEE Std. 1800-2005.
[6] J. R. Burch, E. M. Clarke, K. L. McMillan, D. L. Dill, L. J. Hwang.
“Symbolic model checking: 10^20 states and beyond.” Information
and Computation, 98(2):142-170, 1992.
[7] K. Claessen. “A coverage analysis for safety property lists.”
Unpublished manuscript. April 2003.
http://www.cs.chalmers.se/~koen/Papers/coverage.ps
[8] K. L. McMillan. “A methodology for hardware verification using
compositional model checking.” Science of Computer
Programming, vol. 37, no. 1-3, May 2000.
[9] D. L. Perry, H. Foster. Applied formal verification. McGraw-Hill,
2005.
[10] J. Richards, D. Phillips. “Creative assertion and constraint methods
for formal design verification.” In Proceedings of DVCon, March 2004.
[11] P. Yeung. “The four pillars of assertion-based verification.” In
Proceedings of EuroDesignCon, October 2004.