Abstract
Testing can be divided into two categories: pre-silicon verification and post-silicon validation. Pre-silicon verification deals with simulating and verifying the RTL code, while post-silicon validation deals with validating the silicon after fabrication.
SoC verification is one of the central challenges in VLSI design: more than 70 percent of design time is spent on verification, so there is a need to construct reusable and robust verification environments. The Universal Verification Methodology (UVM) is a promising solution to address these needs. This paper presents a survey of the features of UVM, covering its pros, cons, and opportunities. It also presents simple steps to verify an IP and build an efficient verification environment. A SoC case study is presented to compare traditional verification with UVM-based verification.
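As an illustration of the kind of reusable environment the survey describes, the following is a minimal, hypothetical UVM test skeleton (class names such as my_env and base_test are invented for this sketch, not taken from the paper); a real environment would add agents, sequences, a scoreboard, and functional coverage.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Minimal environment placeholder; a full environment would contain
    // agents (driver/monitor/sequencer), a scoreboard, and coverage collectors.
    class my_env extends uvm_env;
      `uvm_component_utils(my_env)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    // A base test that builds the environment and runs for a short time.
    class base_test extends uvm_test;
      `uvm_component_utils(base_test)
      my_env env;
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        env = my_env::type_id::create("env", this);
      endfunction
      task run_phase(uvm_phase phase);
        phase.raise_objection(this);
        `uvm_info("TEST", "Stimulus would be started here via sequences", UVM_LOW)
        #100ns;
        phase.drop_objection(this);
      endtask
    endclass

    // Top level: run_test() selects the test by name (e.g., +UVM_TESTNAME=base_test).
    module tb_top;
      initial run_test("base_test");
    endmodule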
This paper presents an implementation of Open Core Protocol (OCP) monitoring on a synthesized FPGA design using an implemented library of Synthesizable SystemVerilog Assertions (SSVA). The SSVA library is developed using the layered structure of SystemVerilog assertions and is used to implement monitors for two OCP profiles. The SSVA library and OCP monitors are first functionally verified in a simulator using a processor-memory communication test case. The test case is then synthesized on a CHIPit Platinum Edition FPGA platform. The implemented library and monitors can be used in many commercial and educational projects due to their simplicity and low FPGA area usage.
In this paper, we propose a systematic set of guidelines for creating an effective formal verification testplan, which consists of an English list of comprehensive requirements that capture the desired functionality of the blocks we intend to formally verify. We demonstrate our formal verification testplanning techniques on a real example involving an AMBA™ AHB parallel to Inter-IC (I2C) serial bus bridge.
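To illustrate what one entry of such an English testplan might look like when formalized, here is a hedged sketch; the signal names (hclk, hresetn, hsel, hwrite, hready, i2c_start) and the 64-cycle bound are hypothetical placeholders, not the interface or requirement from the paper's bridge.

    // Requirement (English): "Every accepted AHB write to the bridge must be
    // followed by the start of an I2C transfer within a bounded number of cycles."
    module bridge_props (
      input logic hclk, hresetn,
      input logic hsel, hwrite, hready,
      input logic i2c_start
    );
      property p_write_reaches_i2c;
        @(posedge hclk) disable iff (!hresetn)
          (hsel && hwrite && hready) |-> ##[1:64] i2c_start;
      endproperty
      a_write_reaches_i2c: assert property (p_write_reaches_i2c);
    endmodule

A formal tool would then either prove such a property exhaustively or return a counterexample trace, which is what makes a well-phrased requirement list so valuable.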
In this paper, we present a method for generating checker circuits from sequential-extended regular expressions (SEREs). Such sequences form the core of increasingly used Assertion-Based Verification (ABV) languages. A checker generator capable of transforming assertions into efficient circuits allows the adoption of ABV in hardware emulation. Towards that goal, we introduce algorithms for sequence fusion and length-matching intersection, two SERE operators that are not typically used over regular expressions. We also develop an algorithm for generating failure-detection automata, a concept critical to extending regular expressions for ABV, and present our efficient symbol encoding. Experiments with complex sequences show that our tool outperforms the best known checker generator.
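For readers more familiar with SVA than PSL SEREs, the two operators above have close SVA analogues: ##0 concatenation behaves like sequence fusion (the last cycle of one sequence overlaps the first cycle of the next), and intersect is the length-matching intersection. The sketch below uses invented signal names and is only meant to show the operators, not the paper's checker circuits.

    module sere_examples (input logic clk, req, gnt, done, start);
      // Fusion-style overlap: 'done' must hold on the same cycle the grant phase ends.
      sequence s_grant_phase; req ##1 gnt [*1:4]; endsequence
      sequence s_fused;       s_grant_phase ##0 done; endsequence
      c_fused: cover property (@(posedge clk) s_fused);

      // Length-matching intersection: the transfer must both match the protocol
      // sequence and span exactly five cycles starting from 'start'.
      sequence s_window;   start ##1 1'b1 [*4]; endsequence   // five cycles total
      sequence s_protocol; start ##1 req ##[1:3] done; endsequence
      c_fits_window: cover property (@(posedge clk) s_protocol intersect s_window);
    endmodule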
The advent of new 90 nm/130 nm VLSI technology and SoC design methodologies has brought an explosive growth in the complexity of modern electronic circuits. As a result, functional verification has become the major bottleneck in any design flow. New methods are required that allow for easier, quicker, and more reusable verification. In this paper we propose an automatic verification methodology that enables fast, transaction-level, coverage-driven, self-checking, constrained-random functional verification. Our approach uses the SystemC Verification library (SCV) to build a tool capable of automatically generating testbench templates. A case study from a real MP3 design is used to show the effectiveness of our approach.
This paper presents techniques that enhance automatically generated hardware assertion checkers to facilitate debugging within the assertion-based verification paradigm. Starting with techniques based on dependency graphs, we construct algorithms for counting and monitoring the activity of checkers and for monitoring assertion completion, and we introduce the concept of assertion threading. These debugging enhancements offer increased traceability and observability within assertion checkers, as well as improved metrics relating to the coverage of assertion checkers. The proposed techniques have been successfully incorporated into the MBAC checker generator.
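The activity counting and completion monitoring described above can be approximated in plain SVA with assertion action blocks and cover properties; the following is a simplified, hypothetical sketch with invented signal names (MBAC itself emits dedicated checker hardware rather than these testbench constructs).

    module checker_activity (input logic clk, rst_n, req, ack);
      int unsigned pass_count, fail_count;   // simple activity counters

      property p_req_ack;
        @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] ack;
      endproperty

      // Action blocks count how often the checker completes with a pass or a fail.
      a_req_ack: assert property (p_req_ack) pass_count++; else fail_count++;

      // Completion coverage: record that the full request/acknowledge scenario
      // was actually exercised, not merely vacuously satisfied.
      c_req_ack_done: cover property (@(posedge clk) req ##[1:4] ack);
    endmodule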
Recently, transaction-level modeling has been widely adopted in the system-level design community. However, transaction-level models (TLMs) are not well defined, and the usage of TLMs in the existing design domains, namely modeling, validation, refinement, exploration, and synthesis, is not well coordinated. This paper introduces a TLM taxonomy and compares the benefits of the different TLM uses.
This paper presents a technique for re-using DFT logic for system functional and silicon debugging. By re-configuring the existing DFT logic implemented on an ASIC, we are able to 1) test each part of an ASIC separately in a system environment and thus locate manufacturing defects, 2) control and observe any state element of an ASIC to facilitate system-function and silicon debugging, and 3) use structural tests to cover devices and their interconnects on a board. Therefore, we can achieve debugging and test at both the device level and the system board level.
The beginnings of modern-day IC test trace back to the introduction of such fundamental concepts as scan, stuck-at faults, and the D-algorithm. Since then, several subsequent technologies have made significant improvements to the state of the art, and IC test has evolved into a multifaceted industry that supports innovation. Scan compression technology has proven to be a powerful antidote to growing test data volume and test application time, catalyzing reductions in both of up to 100 times. This article sketches a brief history of test technology research, tracking the evolution of compression technology that has led to the success of scan compression. It is not our intent to identify specific inventors on a fine-grained timeline; instead, we present the important concepts at a high level, on a coarse timeline. Starting in 1998 and continuing to the present, numerous scan-compression-related inventions have had a major impact on the test landscape. However, this article is also not a survey of the various scan compression methods. Rather, we focus on the evolution of the types of constructs used to create breakthrough solutions.
Test data compression consists of test vector compression on the input side and response compaction on the output side. Test vector compression has been an active area of research. This article summarizes and categorizes these techniques, focusing on hardware-based test vector compression techniques for scan architectures. Test vector compression schemes fall broadly into three categories: code-based schemes use data compression codes to encode test cubes; linear-decompression-based schemes decompress the data using only linear operations (that is, LFSRs and XOR networks); and broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.
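As a toy illustration of the code-based category, the sketch below decodes simple (value, run-length) codewords into a serial scan stream. It is only meant to convey the idea of on-chip decompression, not any specific published scheme, and the module and port names are invented.

    module rle_decompressor #(parameter int LEN_W = 4) (
      input  logic             clk, rst_n,
      input  logic             code_valid,     // a new codeword is presented
      input  logic             code_value,     // bit value to repeat
      input  logic [LEN_W-1:0] code_runlen,    // how many copies to shift out
      output logic             code_ready,     // decompressor can accept a codeword
      output logic             scan_out,       // serial stream toward the scan chain
      output logic             scan_shift      // qualifies scan_out
    );
      logic [LEN_W-1:0] remaining;
      logic             value_q;

      assign code_ready = (remaining == '0);
      assign scan_out   = value_q;
      assign scan_shift = (remaining != '0);

      always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
          remaining <= '0;
          value_q   <= 1'b0;
        end else if (code_ready && code_valid) begin
          remaining <= code_runlen;        // load a new run
          value_q   <= code_value;
        end else if (remaining != '0) begin
          remaining <= remaining - 1'b1;   // emit one bit of the current run
        end
      end
    endmodule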
In formal verification, we verify that a system is correct with respect to a specification. Even when the system is proven correct, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. The challenge of making the verification process as exhaustive as possible is even more crucial in simulation-based verification, where the infeasible task of checking all input sequences is replaced by checking a test suite consisting of a finite subset of them. It is very important to measure the exhaustiveness of the test suite, and indeed there has been extensive research in the simulation-based verification community on coverage metrics, which provide such a measure. It turns out that no single measure can be absolute, leading to the development of numerous coverage metrics whose usage is determined by industrial verification methodologies. On the other hand, prior research on coverage in formal verification has focused solely on state-based coverage. In this paper we adapt the work done on coverage in simulation-based verification to the formal-verification setting in order to obtain new coverage metrics. Thus, for each of the metrics used in simulation-based verification, we present a corresponding metric that is suitable for the setting of formal verification, and describe an algorithmic way to check it.
In this article, we present a novel approach to real-time tracking of full-chip heatmaps for commercial off-the-shelf microprocessors based on machine learning. The proposed post-silicon approach, named RealMaps, only uses the existing embedded temperature sensors and workload-independent utilization information, which are available in real time. Moreover, RealMaps does not require any knowledge of the proprietary design details or manufacturing process-specific information of the chip. Consequently, the methods presented in this work can be implemented by either the original chip manufacturer or a third party alike, and are aimed at supplementing, rather than substituting, the temperature data sensed from the existing embedded sensors. The new approach starts with offline acquisition of accurate spatial and temporal heatmaps using an infrared thermal imaging setup while nominal working conditions are maintained on the chip. To build the dynamic thermal model, a temporal-aware long short-term memory (LSTM) neural network is trained with system-level features such as chip frequency, instruction counts, and other high-level performance metrics as inputs. Instead of a pixel-wise heatmap estimation, we perform a 2D spatial discrete cosine transformation (DCT) on the heatmaps so that they can be expressed with just a few dominant DCT coefficients. This allows the model to be built to estimate just the dominant spatial features of the 2D heatmaps, rather than the entire heatmap images, making it significantly more efficient. Experimental results from two commercial chips show that RealMaps can estimate the full-chip heatmaps with 0.9°C and 1.2°C root-mean-square error, respectively, and takes only 0.4 ms per inference, which suits real-time use well. Compared to the state-of-the-art pre-silicon approach, RealMaps shows similar accuracy, but with much lower computational cost.
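For reference, the 2D DCT used for this kind of spatial compression typically takes the standard DCT-II form below, written here with generic symbols (not the paper's notation): x_{m,n} is the M-by-N heatmap, X_{u,v} are the coefficients of which only the low-index ones are retained, and alpha(.) is the usual orthonormalization factor. The exact normalization used by RealMaps may differ.

    X_{u,v} = \alpha(u)\,\alpha(v)
              \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} x_{m,n}\,
              \cos\!\left[\frac{\pi(2m+1)u}{2M}\right]
              \cos\!\left[\frac{\pi(2n+1)v}{2N}\right]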
This paper proposes a new CAD tool that automates RTL code generation based on the IP-XACT standard (developing RTL code from XML files). Much related work generates RTL designs from C-language descriptions; in this work, the generation is based on XML descriptions. The tool is developed in Python, and the generated RTL code can be synthesized by synthesis tools such as Design Compiler. Several commercial tools such as MATLAB have this capability, but the proposed tool is faster and more configurable.
This chapter starts with the definition of an assertion, illustrated with simple examples, moving on to its advantages as applied to real-life projects, the types of assertions that need to be added for a given SoC project, and the methodology components needed to successfully adopt assertions in your project. How do you know when you have added enough assertions?
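By way of a minimal example of the kind this chapter opens with (the signal names are generic placeholders), the snippet below shows one immediate and one concurrent SystemVerilog assertion.

    module assertion_examples (input logic clk, rst_n, read_en, write_en, req, gnt);
      // Immediate assertion: checked whenever the procedural block executes.
      always_comb
        rw_exclusive: assert (!(read_en && write_en))
          else $error("read_en and write_en active simultaneously");

      // Concurrent assertion: temporal check evaluated on every clock edge;
      // a request must be granted within one to three cycles.
      a_req_granted: assert property (
        @(posedge clk) disable iff (!rst_n) req |-> ##[1:3] gnt
      );
    endmodule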
We propose a novel semi-automatic methodology to formally verify clock-domain synchronization protocols in industrial-scale hardware designs. Establishing the functional correctness of all clock-domain crossings (CDCs) in a system-on-chip (SoC) normally requires non-trivial manual deductive reasoning. In contrast, our approach produces a small sequence of easy queries to the user. The key idea is to use counterexample-guided abstraction refinement (CEGAR) as the algorithmic back-end. The user influences the course of the algorithm based on information extracted from intermediate abstract counterexamples. The workload on the user is small, both in terms of the number of queries and the degree of design insight the user is asked to provide. With this approach, we formally proved the correctness of every CDC in a recent SoC design from STMicroelectronics comprising over 300,000 registers and seven million gates.
ASIC/SoC verification is one of the most important tasks in the digital design world: surveys indicate that 60 to 70% of total design time is consumed by verification alone. Different companies adopted different verification methodologies until UVM came into the picture, which is the best solution to overcome the drawbacks of previous methodologies. This paper presents a generic verification environment architecture based on UVM and shows how the different components are connected with each other. As a case study, a generic design of a Hybrid Memory Cube (HMC) memory controller is presented along with some verification test scenarios.
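The component-to-component connections such an architecture relies on are typically made in connect_phase through TLM analysis ports; the following compressed sketch is hypothetical (the transaction and class names are invented, and the agent, driver, and sequencer internals are omitted).

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class mem_txn extends uvm_sequence_item;            // placeholder transaction
      `uvm_object_utils(mem_txn)
      function new(string name = "mem_txn"); super.new(name); endfunction
    endclass

    class mem_monitor extends uvm_monitor;
      `uvm_component_utils(mem_monitor)
      uvm_analysis_port #(mem_txn) ap;                  // broadcasts observed transactions
      function new(string name, uvm_component parent);
        super.new(name, parent);
        ap = new("ap", this);
      endfunction
    endclass

    class mem_scoreboard extends uvm_scoreboard;
      `uvm_component_utils(mem_scoreboard)
      uvm_analysis_imp #(mem_txn, mem_scoreboard) imp;  // receives transactions
      function new(string name, uvm_component parent);
        super.new(name, parent);
        imp = new("imp", this);
      endfunction
      function void write(mem_txn t);                   // called for every observed item
        `uvm_info("SCB", "transaction received", UVM_LOW)
      endfunction
    endclass

    class mem_env extends uvm_env;
      `uvm_component_utils(mem_env)
      mem_monitor    mon;
      mem_scoreboard scb;
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        mon = mem_monitor::type_id::create("mon", this);
        scb = mem_scoreboard::type_id::create("scb", this);
      endfunction
      function void connect_phase(uvm_phase phase);
        mon.ap.connect(scb.imp);                        // monitor-to-scoreboard hookup
      endfunction
    endclass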
A key problem in postsilicon validation is to identify a small set of traceable signals that are effective for debug during silicon execution. Structural analysis used by traditional signal selection techniques leads to a poor restoration quality. In contrast, simulation-based selection techniques provide superior restorability but incur significant computation overhead. In this paper, we propose an efficient signal selection technique using machine learning to take advantage of simulation-based signal selection while significantly reducing the simulation overhead. The basic idea is to train a machine learning framework with a few simulation runs and utilize its effective prediction capability (instead of expensive simulation) to identify beneficial trace signals. Specifically, our approach uses: 1) bounded mock simulations to generate training vectors for the machine learning technique and 2) a compound search-space exploration approach to identify the most profitable signals. Experimental results indicate that our approach can improve restorability by up to 143.1% (29.2% on average) while maintaining or improving runtime compared with the state-of-the-art signal selection techniques.
The usage of Field Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs) with complex functionalities such as Digital Signal Processing (DSP) is increasing in onboard space applications. Verification of these complex designs within a limited schedule and with limited resources is challenging. In order to ensure reliable functioning of these designs under all possible run-time conditions, functional verification must be carried out thoroughly. Development of an automated self-checking verification environment or testbench, including generation of bit-accurate golden reference values, is a complex and time-consuming task even with the use of state-of-the-art Hardware Verification Languages (HVLs) and methodologies such as SystemVerilog (SV) and the Universal Verification Methodology (UVM). This paper discusses a method for functional verification of DSP-based VLSI designs using SV and MATLAB. The architecture of the verification environment, the technique for coupling MATLAB with the SV-based verification environment, and the generation of bit-accurate golden references in real time are also discussed in detail, along with two case studies.
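One common way to couple an external golden-reference model (for example, C code generated from a MATLAB model) with a SystemVerilog testbench is through DPI-C. The sketch below only illustrates that general pattern; the function name golden_fir_sample and the module interface are hypothetical and not the coupling mechanism described in the paper.

    module dsp_ref_check (input logic clk, input int dut_in, input int dut_out);
      // Hypothetical DPI-C import of a bit-accurate reference model
      // (e.g., C code produced from a MATLAB model).
      import "DPI-C" function int golden_fir_sample(input int sample_in);

      int expected;
      always @(posedge clk) begin
        expected = golden_fir_sample(dut_in);           // golden reference value
        check_match: assert (dut_out == expected)
          else $error("DUT output %0d != golden reference %0d", dut_out, expected);
      end
    endmodule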
TSV-based 3D-IC design can reduce the connection length of stacked ICs and enhance the I/O bandwidth of heterogeneous integrated circuits. However, the testing of 3D ICs is more complicated than that of 2D ICs. This paper presents an efficient on-chip 3D-IC test framework that embeds the test procedure for TSVs into the memory BIST process. By using the same test patterns generated by the memory BIST mechanism, faults in both the memories and the TSVs can be detected simultaneously, without extra time to test the TSVs. The area overhead for on-chip testing is also reduced significantly. Experimental results show that the proposed test framework achieves a good reduction in test time with a very low area overhead penalty for a memory-logic stacked IC.
Formal Verification: An Essential Toolkit for Modern VLSI Design presents practical approaches to utilizing formal verification (FV) for design and validation, with hands-on advice for working engineers integrating these techniques into their work.
Building on a basic knowledge of SystemVerilog, this book demystifies FV and presents the practical applications that are bringing it into mainstream design and validation processes at Intel and other companies. The text prepares readers to effectively introduce FV in their organization and deploy FV techniques to increase design and validation productivity.
Post-silicon validation is a critical part of integrated circuit design methodology. The primary objective is to detect and eliminate the bugs that have escaped the pre-silicon validation phase. One of the key challenges in post-silicon validation is the limited observability of internal signals in manufactured chips. A promising direction for improving observability is to combine trace and scan signals: a small set of trace signals is stored every cycle, whereas a large set of scan signals is dumped across multiple cycles. Existing techniques are not very effective, since they explore only a coarse-grained combination of trace and scan signals. In this paper, we propose a fine-grained architecture that addresses this issue using various scan chains with different dumping periods. We also propose efficient algorithms to select beneficial signals based on this architecture. Our experimental results demonstrate that our approach can improve the restoration ratio by up to 127% (36% on average) compared with existing trace-only techniques. Our approach also shows up to 125% improvement (61.7% on average) compared with techniques that allow a combination of trace and scan signals, with minor (<1%) area and power overhead.
High-quality tests for post-silicon validation should be ready before a silicon device becomes available, in order to save time spent on preparing, debugging, and fixing tests after the device is available. Test coverage is an important metric for evaluating the quality and readiness of post-silicon tests. We propose an online-capture, offline-replay approach to coverage evaluation of post-silicon validation tests with virtual prototypes, for estimating silicon device test coverage. We first capture the necessary data from a concrete execution of the virtual prototype within a virtual platform under a given test, and then compute the test coverage by efficiently replaying this execution offline on the virtual prototype itself. Our approach provides early feedback on the quality of post-silicon validation tests before silicon is ready. To ensure the fidelity of early coverage evaluation, our approach has been further extended to support coverage evaluation and conformance checking in the post-silicon stage. We have applied our approach to evaluate a suite of common tests on virtual prototypes of five network adapters. Our approach was able to reliably estimate that this suite achieves high functional coverage on all five silicon devices.
Glitches are spurious signal transitions that occur due to unbalanced path delays at the inputs of a gate. The presence of glitches in a digital system increases the number of signal transitions, thereby increasing the dynamic power consumption of the system and, consequently, the overall power consumption, a major design criterion. Furthermore, glitches have been shown to be a source of side-channel leakage and can be exploited to enhance the success rate of power analysis attacks against cryptographic applications, even in the presence of side-channel countermeasures. Therefore, elimination of glitches in digital systems implemented on hardware platforms such as Field Programmable Gate Arrays (FPGAs) is imperative for low-power and secure designs. However, a targeted application of glitch elimination techniques requires precise detection of possible glitches. While post-place-and-route simulation allows the user to detect and display glitches, it cannot take process variations of the FPGA into account and relies solely on the accuracy of the simulation model. Hence, the implemented circuit might have glitches that were not exposed by simulation, and only measuring the actual hardware implementation will show all glitches. These measurements are typically made with high-quality, fast, and therefore expensive oscilloscopes. In this paper we introduce a methodology to detect glitches in hardware implementations on FPGAs. We designed a circuit that can be implemented inside the FPGA along with the circuit under test, which not only detects the presence of glitches but also captures the glitch waveform and the relative location of a glitch with respect to the system clock. To enhance the resolution of the captured waveform we oversample the data multiple times with different phase shifts of the sampling clock. Through our proposed method we can reliably detect glitches with a width as small as 2 ns on a Spartan-3E FPGA and determine their location relative to the system clock with a resolution of 20 ps.
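A heavily simplified sketch of the oversampling idea is shown below: the signal under observation is sampled by several phase-shifted copies of the sampling clock (generated, for example, by an FPGA clock manager), and disagreement among samples taken within one period reveals a transition. This is only a conceptual illustration with invented port names; the circuit in the paper also records the glitch waveform and its position relative to the system clock.

    module glitch_sampler (
      input  logic clk0, clk90, clk180, clk270,   // phase-shifted sampling clocks
      input  logic sig,                            // signal under observation
      input  logic sys_clk,                        // system clock domain for reporting
      output logic activity_seen                   // samples disagreed within one period
    );
      logic s0, s90, s180, s270;

      always_ff @(posedge clk0)   s0   <= sig;
      always_ff @(posedge clk90)  s90  <= sig;
      always_ff @(posedge clk180) s180 <= sig;
      always_ff @(posedge clk270) s270 <= sig;

      // If the four samples are not all equal, the signal toggled inside the
      // sampling period (a possible glitch). Proper synchronization back into
      // sys_clk is omitted here for brevity but required in a real implementation.
      always_ff @(posedge sys_clk)
        activity_seen <= (s0 != s90) || (s90 != s180) || (s180 != s270);
    endmodule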
Virtual prototypes are increasingly used in device/driver co-development and co-validation to enable early driver development and reduce product time-to-market. However, drivers developed over virtual prototypes often do not work readily on silicon devices, since silicon devices often do not conform to their virtual prototypes. Therefore, it is important to detect the inconsistencies between silicon devices and virtual prototypes.
We present an approach to post-silicon conformance checking of a hardware device against its virtual prototype, i.e., a virtual device. The conformance between the silicon and virtual devices is defined over their interface states. Our approach symbolically executes the virtual device with the same driver request sequence issued to the silicon device, and checks whether the interface states of the silicon and virtual devices are consistent. Inconsistencies detected indicate potential errors in either the silicon device or the virtual device. We have evaluated our approach on three network adapters and their virtual devices, and found 15 inconsistencies exposing 15 real bugs in the silicon and virtual devices. The results demonstrate that our approach is useful and efficient in facilitating device/driver co-validation at the post-silicon stage.
The goal of this chapter is to explain why it is important for you to learn SystemC. If you already know why you are studying SystemC, then you can jump ahead to Chapter 2. If you are learning SystemC for a college course or because your boss says you must, then you may benefit from this chapter. If your boss doesn’t know why you need to spend your time learning SystemC, then you may want to show your boss this chapter.
SystemC is a system design and modeling language. This language evolved to meet a system designer’s requirements for designing and integrating today’s complex electronic systems very quickly while assuring that the final system will meet performance expectations.
Typically, today’s systems contain both application-specific hardware and software. Furthermore, the hardware and software are usually co-developed on a tight schedule with tight real-time performance constraints and stringent requirements for low power. Thorough functional (and architectural) verification is required to avoid expensive and sometimes catastrophic failures in the device. In some cases, these failures result in the demise of the company or organization designing the errant system. The prevailing name for this concurrent and multi-disciplinary approach to the design of complex systems is electronic system-level design or ESL.
The growing importance of post-silicon validation in ensuring the functional correctness of high-end designs increases the need for synergy between pre-silicon verification and post-silicon validation. We propose a unified functional verification methodology for the pre- and post-silicon domains. This methodology is based on a common verification plan and similar languages for test templates and coverage models. Implementation of the methodology requires a user-directable stimuli generation tool for the post-silicon domain. We analyze the requirements for such a tool and the differences between it and its pre-silicon counterpart. Based on these requirements, we implemented a tool called Threadmill and used it in the verification of the IBM POWER7 processor chip with encouraging results.
Design-time power analysis is one of the most critical tasks conducted by chip architects and circuit designers. While computer-aided power analysis tools can provide power consumption estimates for various circuit blocks, these estimates can substantially deviate from the actual power consumption of working silicon chips. We propose a novel methodology that provides accurate, detailed post-silicon spatial power estimates using the thermal infrared emissions from the backside of silicon die. We theoretically and empirically demonstrate the inherent difficulties in thermal to power inversion. These difficulties arise from measurement errors and from the inherent spatial low-pass filtering associated with heat diffusion. To address these difficulties we propose new techniques from regularization theory to invert temperature to power. Furthermore, we propose new techniques to compute the emissivities and conductances required for any infrared to power inversion method. To verify our results, a programmable circuit of micro heaters is implemented to create any desired power pattern. The thermal emissions of different known injected power patterns are captured using a state-of-the-art infrared camera, and then our characterization techniques are applied to invert the thermal emissions to power. The estimated power patterns are validated against the injected power patterns to demonstrate the accuracy of our methodology.
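The inversion difficulty described above is usually handled with some form of regularization; a generic Tikhonov-style formulation is sketched below, written with generic symbols that are not the paper's notation: t is the measured thermal image, K the (spatially low-pass) thermal model relating power to temperature, L a smoothing operator, and lambda the regularization weight. The paper's exact regularizer and solver may differ.

    \hat{p} = \arg\min_{p} \; \|Kp - t\|_2^2 + \lambda \|Lp\|_2^2,
    \qquad
    \hat{p} = \left(K^{\top}K + \lambda L^{\top}L\right)^{-1} K^{\top} t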
With the progress of deep-submicron technology, embedded memory grows greatly in System-on-Chip designs, and an efficient test method with relatively low cost is required for the mass production process. A Programmable Memory Built-In Self-Test (P-MBIST) solution provides a certain degree of flexibility with reasonable hardware cost, based on a customized controller/processor. In this work, we propose a hardware-sharing architecture for P-MBIST design. By sharing the common address generator and controller, the area overhead of the P-MBIST circuit can be significantly reduced, and higher testing speed can be achieved by inserting two pipeline stages. Finally, the proposed P-MBIST circuit can be automatically generated from a user-defined configuration file.
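As a small illustration of the hardware-sharing idea, the sketch below shows a single up/down address generator that several memory interfaces could time-share under control of the P-MBIST controller. The module and port names are invented, and the real design is considerably more involved (march-element sequencing, pipelining, configuration loading).

    module shared_addr_gen #(parameter int AW = 10) (
      input  logic          clk, rst_n,
      input  logic          step,      // advance to the next address
      input  logic          dir_up,    // 1: ascending march, 0: descending march
      input  logic          load_max,  // preset to the top address (for down marches)
      output logic [AW-1:0] addr
    );
      always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n)        addr <= '0;
        else if (load_max) addr <= '1;                      // all-ones top address
        else if (step)     addr <= dir_up ? addr + 1'b1
                                          : addr - 1'b1;
      end
    endmodule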
Modern IC designs have reached unparalleled levels of complexity, resulting in more and more bugs being discovered after design tape-out. However, so far only very few EDA tools for post-silicon debugging have been reported in the literature. In this work we develop a methodology and new algorithms to automate this debugging process. Key innovations in our technique include support for the physical constraints specific to post-silicon debugging and the ability to repair functional errors through subtle modifications of an existing layout. In addition, our proposed post-silicon debugging methodology (FogClear) can repair some electrical errors while preserving functional correctness. Thus, by automating this traditionally manual debugging process, our contributions promise to reduce engineers' debugging effort. As our empirical results show, we can automatically repair more than 70% of our benchmark designs.
Accurate fault diagnosis for large machines is very important because of its economic significance. In essence, fault diagnosis is a pattern classification and recognition problem of judging whether the operational status of the system is normal or not. Support vector machines (SVMs), well motivated theoretically, have been introduced as effective methods for solving classification problems. However, the generalization performance of SVMs is sometimes far from the expected level due to their limitations in practical applications. Ensemble learning of SVMs provides a promising way forward for such cases. In this paper, a new ensemble learning method based on a genetic algorithm is presented. The genetic algorithm is introduced into ensemble learning so as to search for accurate and diverse classifiers to construct a good ensemble. The presented method works at a higher level and is more direct than other search-based ensemble learning methods. Different strategies for constructing diverse base classifiers are also studied. Experimental results on a steam turbine fault diagnosis problem show that the presented method can achieve much better generalization performance than other methods, including a single SVM, Bagging, and AdaBoost.M1, provided the strategies used are appropriate for the practical problem.
This paper describes SoCBase-VL, a C/C++-based integrated framework for SoC functional verification. It has a layered architecture which provides easier testbench description, automatic verification of bus interfaces, and seamless testbench migration. This framework does not require verification engineers to learn other verification languages as long as they have sufficient knowledge of both C/C++ and SystemC. We have confirmed its usefulness by applying it to the verification of a TFT-LCD controller.
Efficient production testing is frequently hampered because (cores in) current complex digital circuit designs require test sets that are too large, even with powerful ATPG tools that generate compact test sets. Built-In Self-Test approaches often suffer from fault coverage problems due to random-resistant faults, which can be improved successfully by means of Test Point Insertion (TPI). In this paper, we evaluate the effect of TPI for BIST on the compactness of ATPG-generated test sets, and it turns out that a significant test set size reduction can most often be obtained. We also propose a novel TPI method, specifically aimed at facilitating compact test generation, based on the 'test counting' technique. Experimental results indicate that the proposed method results in even larger and, moreover, more consistent reductions of test set sizes.
The author first considers the basic logic analyser as a general-purpose design tool. The role of post-processing capabilities is emphasized, and the development of a modular, fully interactive logic analysis system is discussed. Stimulus-response measurements are dealt with, and developments which support links to CAE tools are discussed.
For the foreseeable future, industrial hardware design will continue to use both simulation and model checking in the design verification process. To date, these techniques are applied in isolation using different tools and methodologies, and different formulations of the problem. This results in cumulative high cost and little (if any) cross-leverage of the individual advantages of simulation and formal verification. With the goal of effectively and advantageously exploiting the co-existence of simulation and model checking, we have developed a tool called FoCs ("Formal Checkers"). FoCs, implemented as an independent component of the RuleBase toolset, takes RCTL properties as input and translates them into VHDL programs ("checkers") which are integrated into the simulation environment and monitor simulation on a cycle-by-cycle basis for violations of the property. Checkers, also called Functional Checkers...
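FoCs turns a property into a small piece of monitoring logic that runs alongside simulation; the pair below conveys the idea only. The generated code in the paper is VHDL, the sketch here is SystemVerilog, the signal names are invented, and the single-outstanding-request simplification is mine (a real checker generator handles overlapping attempts).

    // The property being checked: every request must be acknowledged within two cycles.
    //   assert property (@(posedge clk) disable iff (!rst_n) req |-> ##[1:2] ack);
    //
    // A checker in the style a generator might emit: a tiny FSM that raises
    // 'violation' for one cycle when the window expires without an ack.
    module focs_style_checker (
      input  logic clk, rst_n, req, ack,
      output logic violation
    );
      logic waiting;     // a request is outstanding
      logic second_try;  // already waited one cycle

      always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
          waiting    <= 1'b0;
          second_try <= 1'b0;
          violation  <= 1'b0;
        end else begin
          violation <= 1'b0;
          if (!waiting) begin
            if (req) begin waiting <= 1'b1; second_try <= 1'b0; end
          end else if (ack) begin
            waiting <= 1'b0;                 // satisfied within the window
          end else if (second_try) begin
            violation <= 1'b1;               // two cycles elapsed with no ack
            waiting   <= 1'b0;
          end else begin
            second_try <= 1'b1;
          end
        end
      end
    endmodule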
Z. Navabi, "The Role of SystemC in the Evolution of Hardware Design," Worcester Polytechnic Institute, 2003.
Vanthournout, "SoC Design Methodology Using SystemC," CoWare, 2003.
P. Narain and C. Cummings, "Clock Domain Crossing Demystified: The Second Generation Solution for CDC Verification," Sunburst Design, 2008.
J. Hsiung, A. Hari, and S.-K. Khare, "Preventing Chip-Killing Glitches on CDC Paths with Automated Formal Analysis," MediaTek, DVCon 2018.
M. Gao, P. Lisherness, and K. T. Cheng, "Post-silicon bug detection for variation induced electrical bugs," in 16th Asia and South Pacific Design Automation Conference (ASP-DAC 2011), pages 273-278, Jan. 2011. doi: https://doi.org/10.1109/ASPDAC.2011.
A. Vintila and I. Tolea (AMIQ Consulting), "Portable Stimulus Driven SystemVerilog/UVM verification environment for the verification of a high-capacity Ethernet communication bridge," Design and Verification Conference (DVCon) US, 2019.
S. Sutherland and T. Fitzpatrick, "UVM Rapid Adoption: A Practical Subset of UVM," 2015.
T. D. Hämäläinen and E. Pekkarinen, "Kactus2: Open Source IP-XACT Tool."
K. S. Mohamed, "New Trends in SoC Verification: UVM, Bug Localization, Scan-Chain-Based Methodology, GA-Based Test Generation," in IP Cores Design from Specifications to Production, Analog Circuits and Signal Processing, Springer, Cham, 2016. https://doi.org/10.1007/978-3-319-22035-2_6.
S. Das, R. Mohanty, P. Dasgupta, and P. P. Chakrabarti, "Synthesis of System Verilog Assertions," Design, Automation and Test in Europe (DATE), München, Germany, 2006.
Y. Abarbanel, I. Beer, L. Glushovsky, S. Keidar, and Y. Wolfsthal, "FoCs: Automatic Generation of Simulation Checkers from Formal Specifications," in Proceedings of the 12th International Conference on Computer Aided Verification (CAV'00), pages 538-542, 2000.