Sean Safarpour

University of Toronto, Toronto, Ontario, Canada

Publications (31) · 4.37 Total impact

  • A.C. Ling, S.D. Brown, S. Safarpour, Jianwen Zhu
    ABSTRACT: Engineering change orders (ECOs), which are used to apply late-stage specification changes and bug fixes, have become an important part of the field-programmable gate array design flow. ECOs are beneficial since they are applied directly to a placed-and-routed netlist which preserves most of the engineering effort invested previously. Unfortunately, designers often apply ECOs in a manual fashion which may have an unpredictable impact on the design's final correctness and end costs. As a solution, this paper introduces an automated method to tackle the ECO problem. This paper uses a novel resynthesis technique which can automatically update the functionality of a circuit by leveraging the existing logic within the design, thereby removing the inefficient manual effort required by a designer. The technique presented in this paper is robust enough to handle a wide range of changes. Furthermore, the technique can successfully make late-stage functional changes while minimally perturbing the placed-and-routed netlist: something that is necessary for ECOs. Also, this technique does this with a minimal impact on the circuit performance where on average over 90% of the placement and routing wires remain unchanged.
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 02/2011 · 1.09 Impact Factor
  • IEEE Trans. on CAD of Integrated Circuits and Systems. 01/2011; 30:18-30.
  • Andreas G. Veneris, Brian Keng, Sean Safarpour
    ABSTRACT: Computer-aided design tools are continuously improving their scalability and efficiency to mitigate the high cost associated with designing and fabricating modern VLSI systems. A key step in the design process is the root-cause analysis of detected errors. Debugging may take months to close and introduces high cost and uncertainty, ultimately jeopardizing the chip release date. This study makes the case for debug automation in each part of the design flow (RTL to silicon) to bridge the gap. Contemporary research, challenges and future directions motivate the urgent need for automation to relieve the pain of this highly manual task.
    Proceedings of the 16th Asia South Pacific Design Automation Conference, ASP-DAC 2011, Yokohama, Japan, January 25-27, 2011; 01/2011
  • Brian Keng, Sean Safarpour, Andreas G. Veneris
    ABSTRACT: In the last decade, functional verification has become a major bottleneck in the design flow. To relieve this growing burden, assertion-based verification has gained popularity as a means to increase the quality and efficiency of verification. Although robust, the adoption of assertion-based verification poses new challenges to debugging due to the presence of errors in the assertions. These unique challenges necessitate a departure from past automated circuit debugging techniques, which are shown to be ineffective. In this work, we present a methodology, mutation model and additional techniques to debug errors in SystemVerilog assertions. The methodology uses the failing assertion, counter-example and mutation model to produce alternative properties that are verified against the design. These properties serve as a basis for possible corrections. They also provide insight into the design behavior and the failing assertion. Experimental results show that this process is effective in finding high quality alternative assertions for all empirical instances. I. INTRODUCTION Functional verification and debugging are the largest bottlenecks in the design cycle, taking up to 46% of the total development time (1). To cope with this bottleneck, new methods such as assertion-based verification (ABV) (2), (3) have been adopted by the industry to ease this growing burden. ABV in particular has been shown to improve observability, reduce debug time and improve overall verification efficiency (2). However, even with the adoption of ABV, debugging remains an ongoing challenge, taking up to 60% of the total verification time (1).
    Design, Automation and Test in Europe, DATE 2011, Grenoble, France, March 14-18, 2011; 01/2011
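    A minimal, self-contained Python sketch (not taken from the paper) of the mutation idea described above: a toy implication assertion is represented as a parameterized check, candidate mutants are enumerated by perturbing its signals and delay, and a mutant is kept only if it holds on the recorded counter-example trace. The trace format, signal names and mutation set are illustrative assumptions.
    ```python
    from itertools import product

    # A toy counter-example trace: one dict of signal values per clock cycle.
    trace = [
        {"req": 1, "gnt": 0, "busy": 0},
        {"req": 0, "gnt": 0, "busy": 1},
        {"req": 0, "gnt": 1, "busy": 0},
    ]

    def implication_holds(trace, ant, con, delay):
        """Check a toy assertion 'ant |-> ##delay con' on a finite trace."""
        for t, cycle in enumerate(trace):
            if cycle[ant] and t + delay < len(trace) and not trace[t + delay][con]:
                return False
        return True

    # The failing assertion in this toy setup is req |-> ##1 gnt; enumerate mutants
    # over the signal set and delays 0..2 and keep those the counter-example admits.
    signals = ["req", "gnt", "busy"]
    mutants = [(a, c, d) for a, c, d in product(signals, signals, range(3)) if a != c]
    candidates = [m for m in mutants if implication_holds(trace, *m)]
    for ant, con, delay in candidates:
        print(f"candidate correction: {ant} |-> ##{delay} {con}")
    ```
    In the methodology above, each surviving candidate would still be verified against the design before being offered to the engineer as a possible correction.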
  • ABSTRACT: Design debugging has become a resource-intensive bottleneck in modern VLSI CAD flows, consuming as much as 60% of the total verification effort. With typical design sizes exceeding the half-million synthesized gates mark, the growing number of blocks to be examined dramatically slows down the debugging process. The aim of this work is to prune the number of debugging iterations for finding all potential bugs, without affecting the debugging resolution. This is achieved by using structural dominance relationships between circuit components. More specifically, an iterative fixpoint algorithm is presented for finding dominance relationships between multiple-output blocks of the design. These relationships are then leveraged for the early discovery of potential bugs, along with their corrections, resulting in significant debugging speed-ups. Extensive experiments on real industrial designs show that 66% of solutions are discovered early due to dominator implications. This results in consistent performance gains in all cases and a 1.7x overall speed-up for finding all potential bugs, demonstrating the robustness and practicality of the proposed approach.
    2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), San Jose, California, USA, November 7-10, 2011; 01/2011
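    A simplified Python sketch of the fixpoint dominator computation the abstract refers to, reduced to single gates rather than multiple-output blocks; the netlist and node names are invented. A node's dominator set is itself plus the intersection of its fanouts' dominator sets, computed toward the observation point.
    ```python
    # fanout: gate -> gates it drives; "PO" is the single observation point (sink).
    fanout = {
        "g1": ["g3"], "g2": ["g3", "g4"],
        "g3": ["g5"], "g4": ["g5"],
        "g5": ["PO"], "PO": [],
    }

    nodes = list(fanout)
    dom = {n: set(nodes) for n in nodes}   # start from the full set ...
    dom["PO"] = {"PO"}                     # ... except the sink, which is dominated only by itself

    changed = True
    while changed:                         # iterate to a fixpoint
        changed = False
        for n in nodes:
            if n == "PO":
                continue
            new = {n} | set.intersection(*(dom[s] for s in fanout[n]))
            if new != dom[n]:
                dom[n], changed = new, True

    for n in sorted(dom):
        print(n, "is dominated by", sorted(dom[n]))
    ```
    Once such sets are known, a suspect reported at a node also implicates its dominators, since any of them could equally mask the erroneous behavior; this is roughly the "early discovery" effect the abstract describes.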
  • ABSTRACT: As contemporary very large scale integration designs grow in complexity, design debugging has rapidly established itself as one of the largest bottlenecks in the design cycle today. Automated debug solutions such as those based on Boolean satisfiability (SAT) enable engineers to reduce the debug effort by localizing possible error sources in the design. Unfortunately, adaptation of these techniques to industrial designs is still limited by the performance and capacity of the underlying engines. This paper presents a novel formulation of the debugging problem using MaxSAT to improve the performance and applicability of automated debuggers. Our technique not only identifies errors in the design but also indicates when the bug is excited in the error trace. MaxSAT allows for a simpler formulation of the debugging problem, reducing the problem size by 80% compared to a conventional SAT-based technique. Empirical results demonstrate the effectiveness of the proposed formulation as run-time improvements of 4.5x are observed on average. This paper introduces two performance improvements to further reduce the time required to find all error sources within the design by an order of magnitude.
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 12/2010 · 1.09 Impact Factor
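    A small illustrative MaxSAT debugging instance in the spirit of the abstract, built with the python-sat package; the two-gate circuit, variable numbering and health-literal encoding are invented for illustration and are not the paper's actual formulation. Hard clauses fix the failing stimulus, the expected response and the guarded gate functions; soft clauses state that each gate is correct, so the gates whose soft clauses an optimal solution must give up are the suspects.
    ```python
    # pip install python-sat
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    # Variables: a, b, c are inputs, y = AND(a, b), z = OR(y, c); h_* are health literals.
    A, B, C, Y, Z, H_AND, H_OR = range(1, 8)
    wcnf = WCNF()

    # Hard: the failing stimulus a=1, b=0, c=0 and the expected response z=1.
    for lit in (A, -B, -C, Z):
        wcnf.append([lit])

    # Hard: gate functions, each guarded by its health literal.
    for clause in ([-H_AND, -Y, A], [-H_AND, -Y, B], [-H_AND, Y, -A, -B],   # h_and -> (y <-> a & b)
                   [-H_OR, -Z, Y, C], [-H_OR, Z, -Y], [-H_OR, Z, -C]):      # h_or  -> (z <-> y | c)
        wcnf.append(clause)

    # Soft: prefer to believe every gate is correct.
    wcnf.append([H_AND], weight=1)
    wcnf.append([H_OR], weight=1)

    with RC2(wcnf) as rc2:
        model = rc2.compute()
        suspects = [name for name, lit in (("AND gate", H_AND), ("OR gate", H_OR))
                    if -lit in model]
        print("one minimum-cost suspect set:", suspects, "cost:", rc2.cost)
    ```
    A full debugger would enumerate all minimum-cost suspect sets and, as the abstract notes, also report when in the error trace each suspect is excited.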
  • B. Keng, S. Safarpour, A. Veneris
    ABSTRACT: Design debugging is a major bottleneck in modern very large scale integration design flows, as both the design size and the length of the error trace contribute to its inherent complexity. With typical design blocks exceeding half a million synthesized logic gates and error traces in the thousands of clock cycles, the complexity of the debugging problem poses a great challenge to automated debugging techniques. This paper addresses this daunting challenge by introducing the bounded model debugging methodology, which iteratively analyzes bounded sequences of the error trace. Two techniques are introduced in this methodology to solve this growing problem. The first technique iteratively analyzes bounded subsequences of the error trace of increasing size until the error is found or the entire trace is analyzed. The second technique partitions the error trace into non-overlapping bounded sequences of clock cycles which are each separately analyzed. A discussion of these two techniques is presented and a unified methodology that leverages the strengths of both techniques is developed. Empirical results on real industrial designs show that, for large designs and long error traces, the proposed methodology finds the actual error in 79% of cases with the first technique and 100% of cases with the second technique, whereas only 21% of cases find the actual error when the methodology is not used. These numbers confirm the benefits of the proposed methodology in allowing conventional automated debuggers to handle much larger real-life circuits.
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 12/2010 · 1.09 Impact Factor
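    An illustrative skeleton of the two windowing strategies described above; the `debug_window` callback and the trace representation are hypothetical stand-ins for an underlying automated debugger.
    ```python
    def debug_window(trace, start, end):
        """Hypothetical stand-in: run a debugger on cycles [start, end) of the trace
        and return the list of suspects found there (empty if none)."""
        return []   # placeholder

    def growing_suffixes(trace, step):
        """Technique 1: analyze suffixes of the error trace of increasing size
        until a suspect is found or the entire trace has been analyzed."""
        end = len(trace)
        for start in range(end - step, 0, -step):
            suspects = debug_window(trace, start, end)
            if suspects:
                return suspects
        return debug_window(trace, 0, end)

    def partitions(trace, size):
        """Technique 2: analyze non-overlapping windows independently and merge."""
        suspects = []
        for start in range(0, len(trace), size):
            suspects += debug_window(trace, start, min(start + size, len(trace)))
        return suspects
    ```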
  • IEEE Trans. on CAD of Integrated Circuits and Systems. 01/2010; 29:1804-1817.
  • ABSTRACT: Silicon debug poses a unique challenge to the engineer because of the limited access to internal signals of the chip. Embedded hardware such as trace buffers helps overcome this challenge by acquiring data in real time. However, trace buffers only provide access to a limited subset of pre-selected signals. In order to debug effectively, it is essential to configure the trace buffer to trace the relevant signals selected from the pre-defined set. This can be a labor-intensive and time-consuming process. This paper introduces a set of techniques to automate the configuration process for trace buffer-based hardware. First, the proposed approach utilizes UNSAT cores to identify signals that can provide valuable information for localizing the error. Next, it finds alternatives for signals not part of the traceable set so that it can imply the corresponding values. Integrating the proposed techniques with a debugging methodology, experiments show that the methodology can eliminate 30% of potential suspects with as few as 8% of registers traced, demonstrating the effectiveness of the proposed procedures.
    11th International Symposium on Quality of Electronic Design (ISQED 2010), 22-24 March 2010, San Jose, CA, USA; 01/2010
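    A minimal python-sat sketch of the UNSAT-core step mentioned above; the tiny CNF and the mapping from assumption literals to register names are invented. Candidate trace signals are passed as solver assumptions, and the core of the unsatisfiable debug instance points at the registers whose recorded values actually contribute to exposing the error, i.e. the ones worth tracing.
    ```python
    # pip install python-sat
    from pysat.solvers import Glucose3

    # Toy debug instance: x1 and x2 are design variables; literals 3-5 are
    # assumptions standing for "the recorded value of register r_i is enforced".
    clauses = [[-3, 1], [-4, -1], [-5, 2]]   # r1 forces x1, r2 forces not-x1, r3 forces x2
    registers = {3: "r1", 4: "r2", 5: "r3"}

    with Glucose3(bootstrap_with=clauses) as solver:
        assert not solver.solve(assumptions=[3, 4, 5])   # the instance is UNSAT
        core = solver.get_core()                         # a subset of the assumptions
        print("informative registers:", sorted(registers[abs(l)] for l in core))
    ```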
  • ABSTRACT: Managing long verification error traces is one of the key challenges of automated debugging engines. Today, debuggers rely on the iterative logic array to model sequential behavior, which drastically limits their application. This work presents Bounded Model Debugging, an iterative, systematic and practical methodology that allows debuggers to tackle larger problems than previously possible. Based on the empirical observation that errors are excited in temporal proximity to the observed failures, we present a framework that improves performance by up to two orders of magnitude and solves 2.7x more problems than a conventional debugger.
    Proceedings of the 15th Asia South Pacific Design Automation Conference, ASP-DAC 2010, Taipei, Taiwan, January 18-21, 2010; 01/2010
  • Brian Keng, Andreas G. Veneris, Sean Safarpour
    ABSTRACT: Functional verification is becoming a major bottleneck in modern VLSI design flows. To manage this growing problem, assertion-based verification has been adopted as one of the key technologies to increase the quality and efficiency of verification. However, this technology also poses new challenges in the form of debugging and correcting errors in the assertions. In this work, we present a framework for correcting and debugging Property Specification Language assertions. The methodology uses the failing assertion, counter-example and mutation model to produce alternative properties that are verified against the design. Each one of these properties serves as a basis for possible corrections. They also provide insight into the design behavior and the failing assertion that can be used for debugging. Preliminary experimental results show that this process is effective in finding alternative assertions for all empirical instances.
    11th International Workshop on Microprocessor Test and Verification, MTV 2010, Austin, TX, USA, December 13-15, 2010; 01/2010
  • S. Safarpour, A. Veneris
    ABSTRACT: Design debugging is one of the major remaining manual processes in the semiconductor design cycle. Despite recent advances in the area of automated design debugging, more effort is required to cope with the size and complexity of today's designs. This paper introduces an abstraction and refinement methodology to enable current debuggers to operate on designs that are orders of magnitude larger than otherwise possible. Two abstraction techniques are developed with the goals of improving debugger performance for different circuit structures: State abstraction is aimed at reducing the problem size for circuits consisting purely of primitive gates, while function abstraction focuses on designs that also contain modular and hierarchical information. In both methods, after an initial abstracted model is created, the problem can be solved by an existing automated debugger. If an error site is abstracted, refinement is necessary to reintroduce some of the abstracted components back into the design. This paper also presents the underlying theory to guarantee correctness and completeness of a debugging tool that operates using the proposed methodology. Empirical results demonstrate improvements in run time and memory capacity of two orders of magnitude over a state-of-the-art debugger on a wide range of benchmark and industrial designs.
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 11/2009 · 1.09 Impact Factor
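    A high-level skeleton of the abstraction-and-refinement loop described above; `abstract_design`, `run_debugger` and `refine` are hypothetical callbacks standing in for the state/function abstraction, the existing automated debugger and the refinement step, respectively.
    ```python
    def abstraction_refinement_debug(design, error_trace,
                                     abstract_design, run_debugger, refine):
        """Debug an abstracted model; refine whenever suspects touch abstracted logic.

        abstract_design(design)               -> (model, abstracted_components)
        run_debugger(model, trace)            -> list of suspect components
        refine(model, abstracted, boundary)   -> (new_model, remaining_abstracted)
        """
        model, abstracted = abstract_design(design)
        while True:
            suspects = run_debugger(model, error_trace)
            # Suspects inside abstracted logic may be artifacts of the abstraction:
            # reintroduce those components and debug again.
            boundary = [s for s in suspects if s in abstracted]
            if not boundary:
                return suspects        # every suspect lies in concrete logic
            model, abstracted = refine(model, abstracted, boundary)
    ```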
  • Sean Safarpour, Andreas G. Veneris
    ABSTRACT: Design debugging is a manual and time-consuming task which takes as much as 60% of the verification effort. To alleviate the debugging pain, automated debuggers must tackle industrial problems by increasing their capacity and improving their performance. This work introduces an abstraction and refinement methodology for debugging that leverages the high-level information inherent to RTL designs. Function abstraction uses the modular nature of designs to simplify the debugging problem. If required, refinement re-introduces the necessary circuitry back into the design in order to find all error locations. The abstraction and refinement process is applied throughout the design's hierarchy, allowing for a divide-and-conquer methodology. The proposed technique is shown to reduce the memory requirement by as much as 27x and reduce the run-time by two orders of magnitude over a conventional debugger.
    IEEE International High Level Design Validation and Test Workshop, HLDVT 2009, San Francisco, CA, USA, 4-6 November 2009; 01/2009
  • ABSTRACT: Design debug remains one of the major bottlenecks in the VLSI design cycle today. Existing automated solutions strive to aid engineers in reducing the debug effort by identifying possible error sources in the design. Unfortunately, these techniques do not provide any information regarding the time at which the bug is active during an error trace or counter-example. This work introduces an automated debug technique that provides the user with both spatial and temporal information about the source of error. The proposed method is based on a Partial MaxSAT formulation which models errors at the CNF clause level instead of the traditional gate or module level. Thus, error sites are identified based on erroneous implications that correspond to locations both in the design and in the error trace. Experiments demonstrate that we can provide this additional information at no extra cost in run time and are able to prune about 61% of all simulation time frames from the debugging process. When compared to a trivial formulation, we observe a performance improvement of up to two orders of magnitude and 5x on average when using the proposed formulation.
    Proceedings of the 19th ACM Great Lakes Symposium on VLSI 2009, Boston Area, MA, USA, May 10-12 2009; 01/2009
  • ABSTRACT: During the FPGA design flow, engineering change orders (ECOs) have become an essential methodology to apply late-stage specification changes and bug fixes. ECOs are beneficial since they are applied directly to a placed-and-routed netlist, which preserves most of the engineering effort invested previously. Unfortunately, designers often apply ECOs in a manual fashion which has an unpredictable impact on the design's final correctness and end costs. As a solution, we introduce an automated method to tackle the ECO problem. Specifically, we introduce a resynthesis technique which can automatically update the functionality of a circuit by leveraging the existing logic within the design, thereby removing the inefficient manual effort required by a designer. Our technique is robust enough to handle a wide range of changes. Furthermore, our technique can successfully make late-stage functional changes while minimally perturbing the placed-and-routed netlist: something that is necessary for ECOs. When applied to several benchmarks on Altera's Stratix architecture, we show that our approach can automatically apply ECOs in over 80% of the cases presented. Furthermore, our technique does this with a minimal impact on circuit performance, where on average over 90% of the placement and routing wires remain unchanged.
    Proceedings of the ACM/SIGDA 17th International Symposium on Field Programmable Gate Arrays, FPGA 2009, Monterey, California, USA, February 22-24, 2009; 01/2009
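    A deliberately simplified, simulation-based sketch of the "leverage the existing logic" idea; it is not the paper's algorithm (which operates on placed-and-routed Stratix netlists) but shows the flavor of the search: given the values the ECO-modified function must take on a set of patterns, look for an existing signal, or a single new gate over two existing signals, that already produces them. The example signals and patterns are invented.
    ```python
    from itertools import combinations

    # Simulated values of existing internal signals over four input patterns.
    existing = {
        "n1": [0, 0, 1, 1],
        "n2": [0, 1, 0, 1],
        "n3": [1, 1, 0, 0],
    }
    target = [0, 1, 1, 1]   # values the ECO-modified function must produce

    GATES = {
        "AND": lambda a, b: [x & y for x, y in zip(a, b)],
        "OR":  lambda a, b: [x | y for x, y in zip(a, b)],
        "XOR": lambda a, b: [x ^ y for x, y in zip(a, b)],
    }

    def find_resynthesis(existing, target):
        for name, vals in existing.items():        # reuse a signal as-is, if possible
            if vals == target:
                return name
        for (na, va), (nb, vb) in combinations(existing.items(), 2):
            for gate, fn in GATES.items():         # otherwise try one new gate
                if fn(va, vb) == target:
                    return f"{gate}({na}, {nb})"
        return None                                # would need deeper resynthesis

    print(find_resynthesis(existing, target))      # -> OR(n1, n2)
    ```
    A real flow would confirm any candidate formally rather than on sampled patterns, and would favor candidates that perturb placement and routing as little as possible.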
  • Yibin Chen, Sean Safarpour, Andreas Veneris
    ABSTRACT: Debugging design errors is a challenging manual task which requires the analysis of long simulation traces. Trace compaction techniques help engineers analyze the cause of the problem by reducing the length of the trace. This work presents an optimal error trace compaction technique based on incremental SAT. The approach builds a SAT instance from the Iterative Logic Array representation of the circuit and performs a binary search to find the minimum trace length. Since failing properties in the original trace must be maintained in the compacted trace, we enrich our formulation with constraints to guarantee property preservation. Extensive experiments show the effectiveness of our SAT-based approach as it preserves failing properties with little overhead to the SAT problem while demonstrating, on average, an order of magnitude improvement in performance.
    Midwest Symposium on Circuits and Systems 01/2009;
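    A skeleton of the binary search over trace length described in the abstract; `fails_within` is a hypothetical stand-in for building the k-frame Iterative Logic Array SAT instance (with the property-preservation constraints) and checking its satisfiability.
    ```python
    def fails_within(k):
        """Hypothetical: return True iff a trace of length k can still reproduce
        the failing properties of the original trace."""
        raise NotImplementedError

    def minimum_failing_length(max_len, fails_within):
        """Binary search for the shortest trace length exhibiting the failure,
        given that the original trace of length max_len fails."""
        lo, hi = 1, max_len
        while lo < hi:
            mid = (lo + hi) // 2
            if fails_within(mid):
                hi = mid          # a witness of length <= mid exists
            else:
                lo = mid + 1
        return lo
    ```
    The paper's incremental SAT formulation concerns how these successive instances are built and solved; the skeleton above only shows the search itself.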
  • Andreas G. Veneris, Sean Safarpour
    ABSTRACT: Semiconductor design companies are in a continuous search for design tools that address the ever increasing chip design complexity coupled with strict time-to-market schedules and budgetary constraints. A fundamental aspect of the design process that remains primitive is that of debugging. It takes months to close, it introduces costs and it may jeopardize the release date of the chip. This paper reviews the debugging problem and the research behind it over the past 20 years. The case for automated RTL debug tools and methodologies is also made to help ease the manual burden and complement current industrial verification practices.
    Proceedings of the 46th Design Automation Conference, DAC 2009, San Francisco, CA, USA, July 26-31, 2009; 01/2009
  • ABSTRACT: The dramatic performance improvements of SAT solvers over the past decade have increased their deployment in hardware verification applications. Many problems that were previously too large and complex for SAT techniques can now be handled in an efficient manner. One such problem is reachability analysis, whose instances are found throughout verification applications such as unbounded model checking and trace reduction. In circuit-based reachability analysis, important circuit information is often lost during the circuit-to-SAT translation process. Observability Don't Cares (ODCs) are an example of such information that can potentially help achieve better and faster results for the SAT solver. This work proposes to use the ODCs to improve the quality and performance of SAT-based reachability analysis frameworks. Since ODCs represent variables whose values do not affect the outcome of a problem, it is possible to satisfy a problem with fewer assigned variables. This in turn leads to more compact solutions and thus fewer solutions to cover the entire solution space. Specifically, this work presents an efficient way to identify ODCs, proves the correctness of leaving ODC variables unassigned, and develops a reachability analysis platform that benefits greatly from the ODCs. The advantages of using ODCs in reachability analysis are demonstrated through extensive experiments on unbounded model checking and trace reduction applications.
    JSAT. 01/2008; 5:1-25.
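    A toy illustration of the observability don't-care notion the article builds on, applied to circuit inputs for brevity; the two-gate circuit and the assignment are invented. Under a given assignment, a variable is an ODC if flipping it leaves every observed output unchanged, so a satisfying assignment may leave it free and one solution then covers several concrete assignments.
    ```python
    # Toy combinational circuit: outputs as a function of an assignment dict.
    def outputs(v):
        g1 = v["a"] and v["b"]
        return {"out": g1 or v["c"]}   # the OR gate masks g1 whenever c is 1

    def is_odc(var, assignment):
        """True if flipping `var` does not change any output under `assignment`."""
        flipped = dict(assignment, **{var: not assignment[var]})
        return outputs(assignment) == outputs(flipped)

    assignment = {"a": True, "b": False, "c": True}
    print({x: is_odc(x, assignment) for x in ("a", "b", "c")})
    # -> {'a': True, 'b': True, 'c': False}: with c=1 the output is 1 regardless of
    #    a and b, so leaving them unassigned covers four assignments with one cube.
    ```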
  • ABSTRACT: In today's SoC design cycles, debugging is one of the most time-consuming manual tasks. CAD solutions strive to reduce the inefficiency of debugging by identifying error sources in designs automatically. Unfortunately, the capacity and performance of such automated techniques must be considerably extended for industrial applicability. This work aims to improve the performance of current state-of-the-art debugging techniques, thus making them more practical. More specifically, this work proposes a novel design debugging formulation based on maximum satisfiability (max-sat) and approximate max-sat. The developed technique can quickly discard many potential error sources in designs, thus drastically reducing the size of the problem passed to an existing debugger. The max-sat formulation is used as a pre-processing step to construct a highly optimized debugging framework. Empirical results demonstrate the effectiveness of the proposed framework as run-time improvements of orders of magnitude are consistently realized over a state-of-the-art debugger.
    Formal Methods in Computer Aided Design; 12/2007
  • ABSTRACT: Many CAD for VLSI techniques use time-frame expansion, also known as the iterative logic array representation, to model the sequential behavior of a system. Replicating industrial-size designs for many time-frames may impose impractically excessive memory requirements. This work proposes a performance-driven, succinct and parametrizable quantified Boolean formula (QBF) satisfiability encoding and its hardware implementation for modeling sequential circuit behavior. This encoding is then applied to three notable CAD problems, namely bounded model checking (BMC), sequential test generation and design debugging. Extensive experiments on industrial circuits confirm outstanding run-time and memory gains compared to state-of-the-art techniques, promoting the use of QBF in CAD for VLSI.
    2007 International Conference on Computer-Aided Design (ICCAD'07), November 5-8, 2007, San Jose, CA, USA; 01/2007
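    To make the memory argument concrete, the sketch below illustrates the conventional time-frame expansion (iterative logic array) that the paper's QBF encoding is designed to avoid: the transition-relation clauses are copied once per cycle, so the formula grows linearly with the bound. The two-variable toy transition relation is invented.
    ```python
    def unroll(transition_clauses, num_state_vars, k):
        """Replicate a CNF transition relation k times (iterative logic array).
        Clauses use vars 1..n for the current frame and n+1..2n for the next frame;
        frame t is obtained by shifting every variable by t * n."""
        shift = lambda lit, off: (abs(lit) + off) * (1 if lit > 0 else -1)
        unrolled = []
        for t in range(k):
            off = t * num_state_vars
            unrolled += [[shift(l, off) for l in clause] for clause in transition_clauses]
        return unrolled

    # Toy transition relation over 2 state variables: x1' <-> x2.
    T = [[-3, 2], [3, -2]]
    for k in (10, 100, 1000):
        print(k, "frames ->", len(unroll(T, 2, k)), "clauses")
    ```
    The QBF encoding proposed in the paper avoids this replication by, roughly, keeping a single parameterized copy of the circuit under quantification; the sketch only shows the baseline it improves upon.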