## No full-text available

To read the full-text of this research, you can request a copy directly from the author.

Any serious attempt at automatic programming of large-scale digital computing machines must provide for some sort of analysis of program structure. Questions concerning order of operations, location and disposition of transfers, identification of subroutines, internal consistency, redundancy and equivalence, all involve a knowledge of the structure of the program under study, and must be handled effectively by any automatic programming system.

... In this paper, we propose Block-wise Abstract Syntax Tree Splitting (BASTS) to address the aforementioned issues. Specifically, we split the code of a method based on the blocks in the dominator tree [23] of the Control Flow Graph of the method [24], instead of statements in the code [13], to capture information beyond the boundary of one statement. After splitting, for each split code, we generate a split AST, which is modeled by a Tree-LSTM. ...

... We expect that each split code after the splitting would provide concise and local context to locate tree-form syntax dependence among code tokens. In this work, we choose to split the code of a method based on the blocks in the dominator tree [23] of the CFG of the method [24]. The following steps show how to generate split code: 1) Construct the CFG to capture control flow relationships among statements. ...

... Note that, although there are various ways to split the code, we choose to leverage the blocks in the dominator tree [23] for two reasons: 1) The consecutive statements inside each block typically represent a piece of business logic which contains richer information than a single statement (more analysis is provided in Sec. VI-C). ...

Automatic code summarization frees software developers from the heavy burden of manual commenting and benefits software development and maintenance. The Abstract Syntax Tree (AST), which depicts the source code's syntactic structure, has been incorporated to guide the generation of code summaries. However, existing AST-based methods are difficult to train and generate inadequate code summaries. In this paper, we present the Block-wise Abstract Syntax Tree Splitting method (BASTS for short), which fully utilizes the rich tree-form syntax structure in ASTs, for improving code summarization. BASTS splits the code of a method based on the blocks in the dominator tree of the Control Flow Graph, and generates a split AST for each code split. Each split AST is then modeled by a Tree-LSTM using a pre-training strategy to capture local non-linear syntax encoding. The learned syntax encoding is combined with code encoding, and fed into a Transformer to generate high-quality code summaries. Comprehensive experiments on benchmarks have demonstrated that BASTS significantly outperforms state-of-the-art approaches in terms of various evaluation metrics. To facilitate reproducibility, our implementation is available at https://github.com/XMUDM/BASTS.

... Choi et al. describe a concise and elegant algorithm for finding which nodes must be merge points, using the dominators and the dominance frontier sets of the original full flow graph, as seen in figure 1. The concept of dominance was first proposed by Prosser [8]. Although he describes dominance of boxes in a diagram, I paraphrase him here in the vocabulary of control flow graphs: ...

... Observing that Choi et al.'s sparse graph construction algorithm requires the full graph's dominators and dominance frontier sets, as defined by [8] and [5] respectively, I explored faster methods for computing this information. Cooper et al. [4] outlined an algorithm (figure 2) for dominators which is simpler and more easily understandable but has a worse upper bound than the well-known method of Lengauer and Tarjan [7]. ...

... 3. A regional method operates over scopes larger than a single extended basic block but smaller than a full procedure. The region may be defined by some source-code control structure, like a loop nest [219]; alternatively, it may be a subset of some graphical representation of the code, like a dominator-based method [197,160]. These methods differ from superlocal methods because they can handle points where different control-flow paths merge. ...

... dominator In a control-flow graph, a node p dominates node q if and only if every path from the graph's unique entry node to q passes through p. In this case, p is q's dominator [197,160]. Many nodes can dominate q; the closest such node is termed q's immediate dominator. ...
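The quoted definitions translate almost directly into executable form. Below is a minimal, illustrative Python sketch (the adjacency-dict CFG encoding and all names are ours, not taken from any cited paper) that computes full dominator sets by fixed-point iteration and then extracts the immediate dominator:

```python
def dominators(cfg, entry):
    """Full dominator sets, straight from the definition: p dominates q iff
    every path from the entry to q passes through p.
    cfg: dict node -> successor list (illustrative encoding)."""
    nodes = set(cfg)
    preds = {n: set() for n in nodes}
    for n in nodes:
        for s in cfg[n]:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}   # start from "everything dominates everything"
    dom[entry] = {entry}
    changed = True
    while changed:                          # iterate down to the fixed point
        changed = False
        for n in nodes - {entry}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n])) if preds[n] else {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def immediate_dominator(dom, node, entry):
    """The closest strict dominator: the unique strict dominator that is
    itself dominated by every other strict dominator of the node."""
    if node == entry:
        return None
    strict = dom[node] - {node}
    return next(d for d in strict if strict <= dom[d])

# Diamond: 0 -> 1 -> {2, 3} -> 4; the branch arms merge at 4.
cfg = {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: []}
dom = dominators(cfg, 0)
```

Neither arm of the branch dominates the join node 4; its immediate dominator is the branch node 1, exactly as the quoted definition predicts.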

Since the earliest days of compilation, code quality has been recognized as an important problem [18]. A rich literature has developed around the issue of improving code quality. This paper surveys one part of that literature: code transformations intended to improve the running time of programs on uniprocessor machines. This paper emphasizes transformations intended to improve code quality rather than analysis methods. We describe analytical techniques and specific data-flow problems to the extent that they are necessary to understand the transformations. Other papers provide excellent summaries of the various sub-fields of program analysis. The paper is structured around a simple taxonomy that classifies transformations based on how they change the code. The taxonomy is populated with example transformations drawn from the literature. Each transformation is described at a depth that facilitates broad understanding; detailed references are provided for deeper study of individual transformations. The taxonomy provides the reader with a framework for thinking about code-improving transformations. It also serves as an organizing principle for the paper. Copyright 1998, all rights reserved. You may copy this article for your personal use in Comp 512. Further reproduction or distribution requires written permission from the authors.

... The dominance problem is an excellent example of the need to balance theory with practice. Ever since Lowry and Medlock's O(N⁴) algorithm appeared in 1969 [23], researchers have steadily improved the time bound for this problem [7,10,17,19,22,26,29]. However, our results suggest that these improvements in asymptotic complexity may not help on realistically-sized examples, and that careful engineering makes the iterative scheme the clear method of choice. ...

... We say box i dominates box j if every path (leading from input to output through the diagram) which passes through box j must also pass through box i. Thus box i dominates box j if box j is subordinate to box i in the program [26]. ...

The problem of finding the dominators in a control-flow graph has a long history in the literature. The original algorithms suffered from a large asymptotic complexity but were easy to understand. Subsequent work improved the time bound, but generally sacrificed both simplicity and ease of implementation. This paper returns to a simple formulation of dominance as a global data-flow problem. Some insights into the nature of dominance lead to an implementation of an O(N²) algorithm that runs faster, in practice, than the classic Lengauer-Tarjan algorithm, which has a time bound of O(E ∗ log(N)). We compare the algorithm to Lengauer-Tarjan because it is the best known and most widely used of the fast algorithms for dominance. Working from the same implementation insights, we also rederive (from earlier work on control dependence by Ferrante, et al.) a method for calculating dominance frontiers that we show is faster than the original algorithm by Cytron, et al. The aim of this paper is not to present a new algorithm, but, rather, to make an argument based on empirical evidence that algorithms with discouraging asymptotic complexities can be faster in practice than those more commonly employed. We show that, in some cases, careful engineering of simple algorithms can overcome theoretical advantages, even when problems grow beyond realistic sizes. Further, we argue that the algorithms presented herein are intuitive and easily implemented, making them excellent teaching tools.
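The "careful engineering" the abstract describes can be illustrated with a compact sketch of the iterative scheme in the style of Cooper et al.: store only immediate dominators and intersect dominator paths by walking up the tree using reverse-postorder numbers. This is an illustrative reimplementation under our own naming, not the authors' code:

```python
def chk_idoms(cfg, entry):
    """Iterative immediate-dominator computation: keep only idoms and
    intersect along reverse-postorder (RPO) numbers.
    cfg: dict node -> successor list (illustrative encoding)."""
    order, seen = [], set()
    def dfs(n):                       # postorder DFS from the entry
        seen.add(n)
        for s in cfg[n]:
            if s not in seen:
                dfs(s)
        order.append(n)
    dfs(entry)
    rpo = order[::-1]                 # reverse postorder
    num = {n: i for i, n in enumerate(rpo)}
    preds = {n: [] for n in rpo}
    for n in rpo:
        for s in cfg[n]:
            preds[s].append(n)
    idom = {entry: entry}
    def intersect(a, b):              # walk the two paths up the dominator tree
        while a != b:
            while num[a] > num[b]:
                a = idom[a]
            while num[b] > num[a]:
                b = idom[b]
        return a
    changed = True
    while changed:
        changed = False
        for n in rpo:
            if n == entry:
                continue
            ps = [p for p in preds[n] if p in idom]
            new = ps[0]
            for p in ps[1:]:
                new = intersect(new, p)
            if idom.get(n) != new:
                idom[n] = new
                changed = True
    return idom

# 0 -> 1 -> {2, 3}, and 2 -> 1 closes a loop with header 1
cfg = {0: [1], 1: [2, 3], 2: [1], 3: []}
idoms = chk_idoms(cfg, 0)
```

Processing nodes in reverse postorder is what makes the iteration converge in very few passes on realistic graphs, which is the paper's central empirical point.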

... Loop Analysis (algorithm 3 line 5): Unbounded loops can lead to infinite symbolic explorations [57]. Since we are interested in reducing false-positive alarms, we employed a postdominator tree [62] over the static control-flow graph to identify the loop headers in each function. This approach is conservative and allows us to explore more execution paths, which is our main goal. ...

Intel SGX enables memory isolation and static integrity verification of code and data stored in user-space memory regions called enclaves. SGX effectively shields the execution of enclaves from the underlying untrusted OS. Attackers can neither tamper with nor examine enclaves' content. However, these properties equally challenge defenders, as they are precluded from any provenance analysis to infer intrusions inside SGX enclaves. In this work, we propose SgxMonitor, a novel provenance analysis to monitor and identify anomalous executions of enclave code. To this end, we design a technique to extract contextual runtime information from an enclave and propose a novel model to represent enclaves' intrusions. Our experiments show that SgxMonitor not only incurs an overhead comparable to traditional provenance tools, but also exhibits macro-benchmark overheads and slowdowns that only marginally affect deployment in real use cases. Our evaluation shows SgxMonitor successfully identifies enclave intrusions carried out by state-of-the-art attacks while reporting no false positives or negatives during normal enclave executions, thus supporting the use of SgxMonitor in realistic scenarios.

... For control dependency, postdominators [37] play an important role. Assume that the CFG G has a single End node nₑ, and that there is a path from each node n in G to nₑ. ...

Existing proofs of correctness for dependence-based slicing methods are limited either to the slicing of intraprocedural programs [2, 39], or the proof is only applicable to a specific slicing method [4, 41]. We contribute a general proof of correctness for dependence-based slicing methods such as Weiser's [50, 51] or Binkley et al.'s [7, 8], for interprocedural, possibly nonterminating programs. The proof uses well-formed weak and strong control closure relations, which are the interprocedural extensions of the generalised weak/strong control closure provided by Danicic et al. [13], capturing various nontermination-insensitive and nontermination-sensitive control-dependence relations that have been proposed in the literature. Thus, our proof framework is valid for a whole range of existing control-dependence relations.
We have provided a definition of semantically correct (SC) slice. We prove that SC slices agree with Weiser slicing, that deterministic SC slices preserve termination, and that nondeterministic SC slices preserve the nondeterministic behavior of the original programs.

... The second kind is more complex and relies on the notions of dominance and post-dominance [17]. For two statements S₁ and S₂, we say that: S₁ dominates S₂ if all paths from the entry point of the function to S₂ pass through S₁; S₂ post-dominates S₁ if all paths from S₁ to the return point of the function pass through S₂. ...
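As the snippet's two definitions suggest, post-dominance is exactly dominance with the direction of paths reversed. A hypothetical sketch (our own adjacency-dict encoding, not from the cited work) that computes post-dominator sets by intersecting over successors instead of predecessors:

```python
def postdominators(cfg, exit_node):
    """Post-dominator sets: q post-dominates p iff every path from p to the
    unique exit passes through q. Dual of dominance, so we intersect over
    successors. cfg: dict node -> successor list (illustrative encoding)."""
    nodes = set(cfg)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:                      # fixed-point iteration
        changed = False
        for n in nodes - {exit_node}:
            succs = cfg[n]
            new = {n} | set.intersection(*(pdom[s] for s in succs)) if succs else {n}
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

# if-then-else: 0 branches to 1 and 2, which merge at 3 (the return point)
cfg = {0: [1, 2], 1: [3], 2: [3], 3: []}
pdom = postdominators(cfg, 3)
```

The merge point 3 post-dominates the branch node 0, while neither branch arm does, mirroring the dominance/post-dominance pairing used by the criterion in the snippet.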

Dataflow test coverage criteria, such as all-defs and all-uses, are among the most advanced coverage criteria. These criteria are defined by complex artifacts combining variable definitions, uses and program paths. Detection of polluting (i.e. inapplicable, infeasible and equivalent) test objectives for such criteria is a particularly challenging task. This short paper evaluates three detection approaches involving dataflow analysis, value analysis and weakest-precondition calculus. We implement and compare these approaches, analyze their detection capacities and propose a methodology for their efficient combination. Initial experiments illustrate the benefits of the proposed approach.

International Conference on Integrated Formal Methods, 2020-11-16/20, Lugano, Switzerland

... Dominance and post-dominance relations [16] used in this criterion state that all paths that go through split must go through its associated merge and, conversely, all paths that go through merge must have gone through its associated split. This criterion ensures that the memory allocations performed by split are eventually freed by merge. ...

Verification of numerical accuracy properties in modern software remains an important and challenging task. This paper describes an original framework combining different solutions for numerical accuracy. First, we extend an existing runtime verification tool called E-ACSL with rational numbers to monitor accuracy properties at runtime. Second, we present an abstract compiler, FLDCompiler, that performs a source-to-source transformation such that the execution of the resulting program, called an abstract execution, is an abstract interpretation of the initial program. Third, we propose an instrumentation library FLDLib that formally propagates accuracy properties along an abstract execution. While each of these solutions has its own interest, we emphasize the benefits of their combination for an industrial setting. Initial experiments show that the proposed technique can efficiently and soundly analyze the accuracy of industrial programs by restricting the analysis on thin numerical scenarios.

... Each loop in a CFG is identified by the basic block forming its header (Prosser [83]). This definition is recalled by Cooper et al. [29], who in 2001 presented an efficient algorithm for identifying these properties. ...

Finding an upper bound on a program's execution time is an essential part of the verification process for critical real-time systems. The programs of such systems generally have variable execution times, and it is difficult, if not impossible, to predict the full set of possible times. Instead, it is preferable to compute an approximation of the Worst-Case Execution Time (WCET). A crucial property of this approximation is that it must be safe, that is, guaranteed to be an upper bound on the WCET. Because we seek to prove that the system in question terminates within a reasonable time, an over-approximation is the only acceptable kind of approximation. This safety property cannot reasonably be guaranteed without static analysis, since a result based on a series of tests cannot be safe without exhaustive coverage of all execution cases. Moreover, in the absence of a certified compilation process (with transfer of properties to the binary), properties must be extracted directly from the binary code to guarantee their reliability. However, this approximation comes at a cost: significant pessimism, the gap between the estimated WCET and the real WCET, leads to needless hardware over-provisioning for the system to meet the timing constraints imposed on it. The goal is therefore, while maintaining the safety guarantee of the WCET estimate, to improve its precision by reducing this gap so that it is small enough not to incur disproportionate additional costs. One of the main sources of overestimation is the inclusion of semantically impossible, so-called infeasible, execution paths in the WCET computation.
This is due to the Implicit Path Enumeration Technique (IPET), which reasons over a superset of the execution paths. When the Worst-Case Execution Path (WCEP) corresponding to the estimated WCET is an infeasible path, the precision of the estimate is negatively affected. To counter this loss of precision, this thesis proposes an infeasible-path detection technique that improves the precision of static analyses (including WCET analyses) by informing them of the infeasibility of certain program paths. This information is conveyed as data-flow properties expressed in a portable annotation language, FFX, allowing the results of our infeasible-path analysis to be communicated to other analyses. The methods presented in this thesis are included in the OTAWA framework, developed by the TRACES team at IRIT. They themselves use approximations to represent the possible machine states at various points of the program.

... We assume a control flow graph in which appropriate node splitting ensures that no node is both a branch and merge node and any entry to a loop has only one incoming edge. We define our components using dominator and post-dominator relationships [28,32]. ...

Side-channels in software are an increasingly significant threat to the confidentiality of private user information, and the static detection of such vulnerabilities is a key challenge in secure software development. In this paper, we introduce a new technique for scalable detection of side-channels in software. Given a program and a cost model for a side-channel (such as time or memory usage), we decompose the control flow graph of the program into nested branch and loop components, and compositionally assign a symbolic cost expression to each component. Symbolic cost expressions provide an over-approximation of all possible observable cost values that components can generate. Queries to a satisfiability solver on the difference between possible cost values of a component allow us to detect the presence of imbalanced paths (with respect to observable cost) through the control flow graph. When combined with taint analysis that identifies conditional statements that depend on secret information, our technique answers the following question: Does there exist a pair of paths in the program's control flow graph, differing only on branch conditions influenced by the secret, that differ in observable side-channel value by more than some given threshold? Additional optimization queries allow us to identify the minimal number of loop iterations necessary for the above to hold or the maximal cost difference between paths in the graph. We perform symbolic execution based feasibility analyses to eliminate control flow paths that are infeasible. We implemented our techniques in a prototype, and we demonstrate its favourable performance against state-of-the-art tools as well as its effectiveness and scalability on a set of sizable, realistic Java server-client and peer-to-peer applications.

... G = (V, E, s) is a flow graph and vᵢ and vⱼ are two vertices of G. We say that vᵢ dominates vⱼ in G if every path from s to vⱼ contains vᵢ [14]. Edge e = (vᵢ, vⱼ) is a back edge if every path from s to vᵢ goes through vⱼ; thus, vⱼ dominates vᵢ [15]. ...
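The back-edge characterization above (an edge whose target dominates its source) is easy to check once dominator sets are available. A self-contained, illustrative Python sketch under our own adjacency-dict encoding:

```python
def back_edges(cfg, entry):
    """Return edges (u, v) whose target v dominates their source u, which is
    the back-edge characterization in the quoted snippet.
    cfg: dict node -> successor list (illustrative encoding)."""
    nodes = set(cfg)
    preds = {n: set() for n in nodes}
    for n in nodes:
        for s in cfg[n]:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:                      # iterative dominator computation
        changed = False
        for n in nodes - {entry}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n])) if preds[n] else {n}
            if new != dom[n]:
                dom[n], changed = new, True
    # an edge (u, v) is a back edge when v dominates u
    return sorted((u, v) for u in nodes for v in cfg[u] if v in dom[u])

# 0 -> 1 -> {2, 3}, and 2 -> 1 is the edge back to the loop header 1
cfg = {0: [1], 1: [2, 3], 2: [1], 3: []}
```

On this example only the edge (2, 1) qualifies: node 1 dominates node 2, but no other edge's target dominates its source.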

Let G be a weighted digraph and s and t be two vertices of G. The reachability assurance (RA) problem is how to label the edges of G such that every path starting at s finally reaches t and the sum of the weights of the labeled edges, called the RA cost, is minimal. The common approach to the RA problem is pathfinding, in which a path is sought from s to t and then the edges of the path are labeled. This paper introduces a new approach, the marking problem (MP), to the RA problem. Compared to the common pathfinding approach, the proposed MP approach has a lower RA cost. It is shown that the MP is NP-complete, even when the underlying digraph is an unweighted directed acyclic graph (DAG) or a weighted DAG with an out-degree of two. An appropriate heuristic algorithm to solve the MP in polynomial time is provided. To mitigate the RA problem as a serious challenge in this area, application of the MP in software testing is also presented. By evaluating the datasets from various program flow graphs, it is shown that the MP is superior to the pathfinding in the context of test case generation.

... Note that by definition, a node always dominates and post-dominates itself. The notion of dominance was first introduced by Prosser [Prosser 1959] as a unification of both definitions. We can express dominance and post-dominance in tree structures by extending these definitions to the notions of immediate dominance and postdominance [Lowry & Medlock 1969], respectively. ...

... First, we need to identify those control-flow regions in the GCFG that can only be entered through a single block. Luckily, a well-known compiler technique called dominator analysis [Prosser 1959] provides exactly the desired semantics, and fast algorithms for its calculation are available [Lengauer and Tarjan 1979]. ...

Cyber-physical systems typically target a dedicated purpose; their embedded real-time control system, such as an automotive control unit, is designed with a well-defined set of functionalities. On the software side, this results in a large amount of implicit and explicit static knowledge about the system and its behavior already at compile time. Compilers have become increasingly better at extracting and exploiting such static knowledge. For instance, many optimizations have been lifted up to the interprocedural or even to the whole-program level. However, whole-program optimizations generally stop at the application-kernel boundary: control-flow transitions between different threads are not yet analyzed.
In this article, we cross the application-kernel boundary by combining the semantics of a real-time operating system (RTOS) with deterministic fixed-priority scheduling (e.g., OSEK/AUTOSAR, ARINC 653, μITRON, POSIX.4) and the explicit application knowledge to enable system-wide, flow-sensitive compiler optimizations. We present two methods to extract a cross-kernel control-flow graph that provides a global view on all possible execution paths of a real-time system. Having this knowledge at hand, we tailor the operating system kernel more closely to the particular application scenario. For the example of a real-world safety-critical control system, we present three possible use cases. (1) Runtime optimizations, by means of specialized system calls for each call site, allow one to speed up the kernel execution path by 28% in our benchmark scenario. Furthermore, we target transient hardware fault tolerance with two automated software-based countermeasures: (2) generation of OS state assertions on the expected system behavior, and (3) a system-wide dominator-region based control-flow error detection, both of which yield significant robustness improvements.

... Postdominators [36] play an important role in slicing. A node n in a CFG is said to postdominate a node n₀ if and only if every path from n₀ to the stop node goes through n. ...

Program slicing identifies the program parts that may affect certain properties of the program, such as the outcomes of conditions affecting the program flow. Ottenstein's Program Dependence Graph (PDG) based algorithm is the state-of-practice for static slicing today: it is well-suited in applications where many slices are computed, since the cost of building the PDG then can be amortized over the slices. But there are applications that require few slices of a given program, and where computing all the dependencies may be unnecessary. We present a light-weight interprocedural algorithm for backward static slicing where the data dependence analysis is done using a variant of the Strongly Live Variables (SLV) analysis. This allows us to avoid building the Data Dependence Graph, and to slice program statements "on-the-fly" during the SLV analysis which is potentially faster for computing few slices. Furthermore we use an abstract interpretation-based value analysis to extend our slicing algorithm to slice low-level code, where data dependencies are not evident due to dynamically calculated addresses. Our algorithm computes slices as sets of Control Flow Graph nodes: we show how to adapt existing techniques to generate executable slices that correspond to semantically correct code, where jump statements have been inserted at appropriate places. We have implemented our slicing algorithms, and made an experimental evaluation comparing them with the standard PDG-based algorithm for a number of example programs. We obtain the same accuracy as for PDG-based slicing, sometimes with substantial improvements in performance.

... The relation is reflexive and transitive on program points and hence induces the equivalence relation ≃. We can compute the sets T(ℓ), and thus the equivalence classes, efficiently by computing the dominator and post-dominator trees of the control flow graph [37,41]. For two program points ℓ and ℓ′, ℓ′ is a dominator of ℓ if every path in the control flow graph from the initial block to ℓ passes through ℓ′, and dually for post-dominators. Once the sets T(ℓ) have been computed, we compute the directed acyclic graph that corresponds to the Hasse diagram of the relation on the quotient, i.e., the nodes in this graph are the equivalence classes and the edges are given by the transitive reduction of the relation. ...

Static program analysis is a technique to prove properties of a program without executing its code. Tools that implement static analysis show, on an abstraction of a given program, that no state in this program violates a user-provided property. The abstraction results from an over-approximation of the set of possible executions of the program. If no state that violates the desired property can be reached in the abstraction, static analysis can guarantee that this state will also not be reached in the original program. Even though static analysis methods are meant to prove the correctness of a program, in practice they are mostly used to detect errors. Static analysis offers powerful support to detect even sophisticated errors that are hard to find using testing.
The drawback of using static analysis for error detection is that it might emit false warnings. A false warning occurs when a violation of a property is detected in the abstraction of the program that does not occur in the original program. False warnings are caused by the loss of precision during abstraction. False warnings can be eliminated if the programmer manually refines the abstraction by providing additional information (e.g., by providing invariants).
A second kind of error that is considered a false warning is an error that occurs only for unrealistic input values of a program. Static analysis considers all possible input values of a program. If the analysis is supposed to consider only certain input values, the programmer has to specify this by providing preconditions. Specifying preconditions is a time-consuming process. Usually, the precondition has to be refined several times, as the scope of a method is not clear in the early stages of development. False warnings of both kinds are a severe limitation to the usability of static analysis.
This motivates the central research question of this thesis: Is it possible to develop a static analysis that detects a non-empty set of relevant program errors but never reports false warnings (neither false warnings due to abstraction nor false warnings due to weak preconditions)? The central contribution of this thesis is a constructive answer to this challenge.
We introduce the concept of doomed program points. Doomed program points indicate program fragments that inevitably crash on any possible execution of the program. A programmer can, under no circumstances, ignore the presence of a doomed program point. The first contribution of this thesis is that we show that the concept of doomed program points can be formalized. We show that doomed program points occur frequently during the coding phase of a program. This leads to the question of whether doomed program points can be detected without producing false warnings.
We present a static analysis that detects doomed program points automatically and precisely, i.e., without emitting false warnings. The analysis requires neither user-provided information in terms of annotations specifying invariants (“assume”) nor in terms of annotations specifying correctness (“assert”). However, the analysis can make use of specifications to detect other kinds of errors. The analysis computes a guarantee in terms of a formal proof for the absence of doomed program points on an abstraction of the given program. That is, the power of abstraction is not used, as is usual in static analysis, to guarantee the absence of errors, but to guarantee the presence of errors. Annotations can be used to reduce the loss of precision that is caused by the abstraction and thus help to increase the detection rate. The proof computed by the analysis is valid for any input value of the program and is a valid proof for any non-empty subset of input values. That is, a doomed program point can never be eliminated (i.e., ignored) by excluding some non-realistic input values.
We ask if the above-mentioned analysis can be realized efficiently. We give a positive answer and present an implementation that can detect doomed program points without producing false warnings and without the need for user interaction. The implementation works without any user-provided information about the pre-state of a method or the invariants of a loop. Yet, it supports specification languages to increase the detection rate. The implementation is based on existing and established frameworks for static analysis. We present several optimizations for this implementation and show that it is applicable in practice. Doomed program point detection is an easy-to-use but powerful analysis that can help to make the use of specification languages and static verifiers more common in today's software engineering.

... At the end of the workload, "fall-through" edges are created to ensure non-overlapping basic blocks and routine boundaries are identified. For each routine, the immediate dominators [9], [10] for each node are found. Loops are then identified using the immediate dominator relationships [11]. ...
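Identifying loops from dominator relationships, as the snippet describes, usually means collecting the natural loop of each back edge: the header plus all nodes that can reach the back edge's source without passing through the header. A hedged sketch under our own adjacency-dict encoding (not the paper's implementation):

```python
def natural_loop(cfg, back_edge):
    """Collect the natural loop of back edge (tail, header): the header plus
    every node that reaches the tail without passing through the header.
    cfg: dict node -> successor list (illustrative encoding)."""
    preds = {n: [] for n in cfg}
    for n in cfg:
        for s in cfg[n]:
            preds[s].append(n)
    tail, header = back_edge
    loop = {header, tail}
    stack = [tail]
    while stack:                      # backward reachability, stopping at the header
        n = stack.pop()
        for p in preds[n]:
            if p not in loop:
                loop.add(p)
                stack.append(p)
    return loop

# 0 -> 1 -> 2 -> 3 -> 1 (back edge), with loop exit 1 -> 4
cfg = {0: [1], 1: [2, 4], 2: [3], 3: [1], 4: []}
loop = natural_loop(cfg, (3, 1))
```

Because the header is seeded into the loop set before the backward walk, the search never escapes past it, which is exactly what makes the collected region a loop body.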

Comparison of simulation-based performance estimates of program binaries built with different compiler settings or targeted at variants of an instruction set architecture is essential for software/hardware co-design and similar engineering activities. Commonly-used sampling techniques for selecting simulation regions do not ensure that samples from the various binaries being compared represent the same source-level work, leading to biased speedup estimates and difficulty in comparative performance debugging. The task of creating equal-work samples is made difficult by differences between the structure and execution paths across multiple binaries such as variations in libraries, in-lining, and loop-iteration counts. Such complexities are addressed in this work by first applying an existing graph-matching technique to call and loop graphs for multiple binaries for the same source program. Then, a new sequence-alignment algorithm is applied to execution traces from the various binaries, using the graph-matching results to define intervals of equal work. A basic-block profile generated for these matched intervals can then be used for phase-detection and simulation-region selection across all binaries simultaneously. The resulting selected simulation regions match both in number and the work done across multiple binaries. The application of this technique is demonstrated on binaries compiled for different Intel® 64 Architecture instruction-set extensions. Quality metrics for speedup estimation and an example of applying the data for performance debugging are presented.

... A classical definition of dominance in a graph is stated as follows [36]: A node u dominates node v if u belongs to every path from the initial node v0 of the graph to v. ...
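
Taken literally, this definition yields the classic iterative computation of dominator sets: Dom(v0) = {v0}, and Dom(v) = {v} ∪ ⋂ Dom(p) over all predecessors p of v, iterated to a fixed point. A minimal sketch (the CFG in the test is hypothetical):

```python
# Hedged sketch: dominator sets from the classical definition, computed
# by iterating the data-flow equations
#   Dom(v0) = {v0};  Dom(v) = {v} union intersection of Dom(p) over preds p
# until nothing changes. Quadratic-ish and simple; not the fastest known
# algorithm, just a direct reading of the definition above.

def dominators(succ, entry):
    """succ: dict node -> list of successors; entry: the initial node v0."""
    nodes = set(succ)
    preds = {v: set() for v in nodes}
    for u in succ:
        for v in succ[u]:
            preds[v].add(u)
    dom = {v: set(nodes) for v in nodes}   # start from "everything dominates"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for v in nodes - {entry}:
            if preds[v]:
                new = {v} | set.intersection(*(dom[p] for p in preds[v]))
            else:
                new = {v}  # node unreachable from entry; degenerate case
            if new != dom[v]:
                dom[v] = new
                changed = True
    return dom
```

On a diamond CFG v0 → {a, b} → c, the fixed point gives Dom(c) = {v0, c}, since neither branch node lies on every path to c.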

We calculate two iterative, polynomial-time graph algorithms from the literature: a dominance algorithm and an algorithm for the single-source shortest path problem. Both algorithms are calculated directly from the definition of the properties by fixed-point fusion of (1) a least fixed point expressing all finite paths through a directed graph and (2) Galois connections that capture dominance and path length.
The approach illustrates that reasoning in the style of fixed-point calculus extends gracefully to the domain of graph algorithms. We thereby bridge common practice from the school of program calculation with common practice from the school of static program analysis, and build a novel view on iterative graph algorithms as instances of abstract interpretation.

... Marimont [104], Prosser [130], Ianov [86], and Karp [93] introduced directed graphs to represent sequential programs. Since then, several researchers have used graphs to model both sequential and parallel computations for a variety of purposes. ...

Compilers analyze the ir form of the program in order to identify opportunities where the code can be improved and to prove the safety and profitability of transformations that might improve that code. Data-flow analysis is the classic technique for compile-time program analysis. It allows the compiler to reason about the runtime flow of values in the program.
This chapter explores iterative data-flow analysis, based on a simple fixed-point algorithm. From basic data-flow analysis, it builds up to construction of static single-assignment (ssa) form, illustrates the use of ssa form, and introduces interprocedural analysis.
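
The iterative fixed-point scheme the chapter builds on can be illustrated with live-variable analysis, one of the classic data-flow problems. The block contents and names in the sketch below are invented for illustration; the equations are the standard ones.

```python
# Hedged sketch: round-robin iterative live-variable analysis, following
# the classic backward data-flow equations
#   LiveOut(b) = union of LiveIn(s) over successors s of b
#   LiveIn(b)  = Use(b) union (LiveOut(b) - Def(b))
# iterated until a fixed point is reached.

def liveness(blocks, succ):
    """blocks: {name: (use_set, def_set)}; succ: {name: [successor names]}."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, (use, defs) in blocks.items():
            out = set().union(*(live_in[s] for s in succ[b]))
            inn = use | (out - defs)
            if out != live_out[b] or inn != live_in[b]:
                live_out[b], live_in[b] = out, inn
                changed = True
    return live_in, live_out
```

With B1 defining b and using a, followed by B2 using b, the fixed point reports b live across the B1 → B2 edge and a live on entry to B1.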

Control dependency (CD) and Static Single Assignment (SSA) form are the basis of many program analysis, transformation, and optimization techniques, and these are implemented and used by modern compilers such as GCC and LLVM. For efficiency reasons, most state-of-the-art algorithms approximate these computations by using postdominator relations and dominance frontiers (DF) respectively, techniques that have been used for over three decades. Dominator-based SSA transformation and control dependencies exhibit a non-dual relationship. Recently, it has been shown that DF-based SSA computation is grossly imprecise, and Weak and Strong Control Closure (WCC and SCC) have wider applicability in capturing control dependencies than postdominator-based CD computation. Our main contribution in this article is the proof of duality between the generation of $\phi$ functions and the computation of weakly deciding (WD) vertices, which are the most computationally expensive part of SSA program construction and WCC/SCC computation respectively. We have provided a duality theorem and its constructive proof by means of an algorithm that can compute both the $\phi$ functions and the WD vertices seamlessly. We have used this algorithm to compute SSA programs and WCC, and performed experiments on real-world industrial benchmarks. The practical efficiency of our algorithm is (i) almost equal to the best state-of-the-art algorithm in computing WCC, and (ii) closer to (but not as efficient as) the DF-based algorithms in computing SSA programs. Moreover, our algorithm achieves the ultimate precision in computing WCC and SSA programs with respect to the inputs of these algorithms and obtains wider applicability in the WCC computation (handling nonterminating programs).

Control dependency is a fundamental concept in many program analysis, transformation, parallelization, and compiler optimization techniques. An overwhelming number of definitions of control dependency relations are found in the literature that capture various kinds of program control flow structures. Weak and strong control closure (WCC and SCC) relations capture nontermination-insensitive and -sensitive control dependencies and subsume all previously defined control dependency relations. In this paper, we have shown that static dependency-based program slicing requires the repeated computation of WCC and SCC. The state-of-the-art WCC and SCC algorithm provided by Danicic et al. has the cubic and the quartic worst-case complexity in terms of the size of the control flow graph and is a major obstacle to their use in static program slicing. We have provided a simple yet efficient method to compute the minimal WCC and SCC which has the quadratic and cubic worst-case complexity, and proved the correctness of our algorithms. We implemented our algorithms and the state-of-the-art ones in the Clang/LLVM compiler framework and ran experiments on a number of SPEC CPU 2017 benchmarks. Our WCC method performs a maximum of 23.8 times and on average 10.6 times faster than the state-of-the-art method to compute WCC. The performance curves of our WCC algorithm for practical applications are closer to the NlogN curve in the microsecond scale. Our SCC method performs a maximum of 226.86 times and on average 67.66 times faster than the state-of-the-art method to compute SCC. Evidently, we improve the practical performance of WCC and SCC computation by an order of magnitude.

Control dependency is a fundamental concept in many program analysis, transformation, parallelization, and compiler optimization techniques. An overwhelming number of definitions of control dependency relations are found in the literature that capture various kinds of program control flow structures. Weak and strong control closure (WCC and SCC) relations capture nontermination-insensitive and -sensitive control dependencies and subsume all previously defined control dependency relations. In this paper, we have shown that static dependency-based program slicing requires the repeated computation of WCC and SCC. The state-of-the-art WCC algorithm provided by Danicic et al. has the cubic worst-case complexity in terms of the size of the control flow graph and is a major obstacle to its use in static program slicing. We have provided a simple yet efficient method to compute the minimal WCC which has the quadratic worst-case complexity, and proved the correctness of our algorithms. We implemented our algorithm and the state-of-the-art one in the Clang/LLVM compiler framework and ran experiments on a number of SPEC CPU 2017 benchmarks. Our method performs a maximum of 23.8 times and on average 10.6 times faster than the state-of-the-art method. The performance curves of our WCC algorithm for practical applications are closer to the NlogN curve in the microsecond scale. Evidently, we improve the practical performance of WCC computation by an order of magnitude.

Verification of numerical accuracy properties in modern software remains an important and challenging task. One of its difficulties is related to unstable tests, where the execution can take different branches for real and floating-point numbers. This paper presents a new verification technique for numerical properties, named Runtime Abstract Interpretation (RAI), that, given an annotated source code, embeds into it an abstract analyzer in order to analyze the program behavior at runtime. RAI is a hybrid technique combining abstract interpretation and runtime verification that aims at being sound as the former while taking benefit from the concrete run to gain greater precision from the latter when necessary. It solves the problem of unstable tests by surrounding an unstable test by two carefully defined program points, forming a so-called split-merge section, for which it separately analyzes different executions and merges the computed domains at the end of the section. Our implementation of this technique in a toolchain called FLDBox relies on two basic tools, FLDCompiler, that performs a source-to-source transformation of the given program and defines the split-merge sections, and an instrumentation library FLDLib that provides necessary primitives to explore relevant (partial) executions of each section and propagate accuracy properties. Initial experiments show that the proposed technique can efficiently and soundly analyze numerical accuracy for industrial programs on thin numerical scenarios.

The production of “Fail-Safe” software is an elusive goal, and it is a matter of controversy whether such a goal can ever be reached. Certainly, defects traceable to human error and misunderstanding can never be completely removed, although mathematical methods may be used to eliminate some. This chapter argues that, barring human errors, and assuming that input and output assertions are true for simple inductively provable subroutines, it should be possible to write fail-safe code for reliable machines, provided arithmetic is limited to integer operations.

This chapter describes the software “debugging” process at the higher levels of code organization in terms of the topological properties of a program’s flowchart. Traditional flowcharts, however, are not the best presentation of logic flow: other representations allow efficient enumeration of the simple logic loops within the flow, and this enumeration will be shown to be central to code fault detection and correction.

Boolean matrices are of prime importance in the study of discrete event systems (DES), which allow us to model systems across a variety of applications. The index of convergence (i.e., the number of distinct powers of a Boolean matrix) is a crucial characteristic in that it assesses the transient behavior of the system until reaching a periodic course. In this paper, adopting a graph-theoretic approach, we present bounds for the index of convergence of Boolean matrices for a diverse class of systems, with a certain decomposition. The presented bounds are an extension of the bound on irreducible Boolean matrices, and we provide non-trivial bounds that were unknown for classes of systems. Furthermore, the proposed method is able to determine the bounds in polynomial time. Lastly, we illustrate how the new bounds compare with the previously known bounds and we show their effectiveness in cases such as the benchmark IEEE 5-bus power system.
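
Following the abstract's own definition of the index of convergence as the number of distinct powers of a Boolean matrix, a brute-force reference implementation simply enumerates Boolean powers until one repeats. The paper's contribution is bounds that avoid such enumeration; the sketch below is ours, for illustrating the quantity being bounded.

```python
# Hedged sketch: the index of convergence of a Boolean matrix, computed
# directly from the definition quoted in the abstract (the number of
# distinct powers A, A^2, A^3, ... under Boolean matrix multiplication).

def bool_mul(A, B):
    """Boolean (and/or) matrix product of two equal-size square matrices."""
    n = len(A)
    return tuple(
        tuple(any(A[i][k] and B[k][j] for k in range(n)) for j in range(n))
        for i in range(n)
    )

def index_of_convergence(A):
    """Count distinct Boolean powers of A before the sequence repeats."""
    A = tuple(tuple(bool(x) for x in row) for row in A)
    seen, P = set(), A
    while P not in seen:
        seen.add(P)
        P = bool_mul(P, A)   # next power; powers eventually cycle
    return len(seen)
```

For the 2×2 swap matrix the powers alternate between the swap and the identity, giving two distinct powers; the identity matrix has only one.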

We investigate the impact of using different algorithmic techniques and data representations in algorithms to calculate the transitive closure of a finite binary relation. These techniques are change monitor, loop fusion, loop tiling and short-circuiting. We explain them and how they are applied in the algorithms. We measured the impact of these techniques on the elapsed time to execute the algorithms, using C++ implementations with two different data representations, and using various data sets. The investigation also covers more basic transitive closure algorithms, and as a result forms a large-scale empirical comparison of such algorithms.
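
Two of the basic ingredients such a comparison involves can be sketched directly: Warshall's algorithm, and a naive iteration guarded by a change monitor that stops once a full pass adds no new pair. Function names below are ours, not the paper's, and the sketch ignores the data-representation and tiling dimensions of the study.

```python
# Hedged sketch: two basic transitive-closure algorithms over a Boolean
# adjacency matrix (lists of lists of bools).

def warshall(adj):
    """Warshall's algorithm: allow paths through intermediates 0..k in turn."""
    n = len(adj)
    R = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            if R[i][k]:
                for j in range(n):
                    if R[k][j]:
                        R[i][j] = True
    return R

def closure_change_monitor(adj):
    """Naive closure with a change monitor: repeat passes until stable."""
    n = len(adj)
    R = [row[:] for row in adj]
    changed = True
    while changed:          # the "change monitor": stop when a pass adds nothing
        changed = False
        for i in range(n):
            for k in range(n):
                if R[i][k]:
                    for j in range(n):
                        if R[k][j] and not R[i][j]:
                            R[i][j] = True
                            changed = True
    return R
```

Both compute the same relation; the change monitor trades a worst-case extra factor of passes for early exit on relations that close quickly.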

AVR processors are widely used in embedded devices. Hence, it is crucial for the security of such devices that cryptography on AVR processors is implemented securely. Timing-side-channel vulnerabilities and other possibilities for information leakage pose serious dangers to the security of cryptographic implementations. In this article, we propose a framework for verifying that AVR assembly programs are free from such vulnerabilities. In the construction of our framework, we exploit specifics of the 8-bit AVR architecture to make the static analysis of timing behavior reliable. We prove the soundness of our analysis against a formalization of the official AVR instruction-set specification.

Graphs are widely used in data analytics applications in a variety of fields and are rapidly gaining attention in the computational science and engineering (CSE) application community. An important application of graphs concerns binary (executable) signature search to address the potential of a suspect binary evading binary signature detection via obfuscation. A control flow graph generated from a binary allows identification of a pattern of system calls, an ordered sequence of which can then be used as signatures in the search. An application proxy, named PathFinder, represents these properties, allowing examination of the performance characteristics of algorithms used in the search. In this work, we describe PathFinder, its signature search algorithm, which is a modified depth-first recursive search in which adjacent nodes are compared for labels before recursing down their edges, and its general performance and cache characteristics. We highlight some important differences between PathFinder and traditional CSE applications. For example, the L2 cache hit ratio (less than 60%) in PathFinder is observed to be substantially lower than those observed for traditional CSE applications.

Conformance with standard corporate and institutional processes and industry best practices is sought because of regulatory requirements and evidence that best practices lead to improved project performance. Automated workflow-engine-enabled Industry Foundation Processes (IFP) are introduced in this paper that facilitate process conformance through structural transparency, foundation process inheritance, and automated conformance checking. While IFP processes can be customized to suit particular project or corporate conditions, they need to conform to a standard core structure and thus behavior. This has been achieved through defining specific workflow inheritance rules and developing an automated structural process conformance checking algorithm. The algorithm has been developed based on graph theory fundamentals using a first-order logic language, which compares two workflows and detects the conformance of a customized one with its associated IFP. The developed algorithm has been functionally tested and validated with different structural settings of workflows with a number of critical cases. Its functionality has been demonstrated in this paper with an example of the commonly used process of RFI (Request for Information). A new construct is thus contributed that can help improve process conformance to industry best practices, particularly in the architectural, engineering, and construction industry, leading to improved project conformance.

This entirely revised second edition of Engineering a Compiler is full of technical updates and new material covering the latest developments in compiler technology. In this comprehensive text you will learn important techniques for constructing a modern compiler. Leading educators and researchers Keith Cooper and Linda Torczon combine basic principles with pragmatic insights from their experience building state-of-the-art compilers. They will help you fully understand important techniques such as compilation of imperative and object-oriented languages, construction of static single assignment forms, instruction scheduling, and graph-coloring register allocation. In-depth treatment of algorithms and techniques used in the front end of a modern compiler. Focus on code optimization and code generation, the primary areas of recent research and development. Improvements in presentation including conceptual overviews for each chapter, summaries and review questions for sections, and prominent placement of definitions for new terms. Examples drawn from several different programming languages.

Functional test generation and design validation frequently use stochastic methods for vector generation. However, for circuits with narrow paths or random-resistant corner cases, purely random techniques can fail to produce adequate results. Deterministic techniques can aid this process; however, they add significant computational complexity. This paper presents a Register Transfer Level (RTL) abstraction technique to derive relationships between inputs and path activations. The abstractions are built off of various program slices. Using such a variety of abstracted RTL models, we attempt to find patterns in the reduced state and input with their resulting branch activations. These relationships are then applied to guide stimuli generation in the concrete model. Experimental results show that this method allows for fast convergence on hard-to-reach states and achieves a performance increase of up to 9× together with a reduction of test lengths compared to previous hybrid search techniques.

We consider a non-orthodox representation of directed graphs which uses the “disjoint set forest” data structure. We show how such a representation can be used in order to efficiently find the dominator tree. Even though the performance of our algorithm does not improve over the already known algorithms for constructing the dominator tree, the approach is new and it gives place to a highly structured and simple to follow proof of correctness.

We introduce the index lattice of a Boolean matrix and discuss its properties. We obtain the condition of the existence of an order-embedding from the index lattice of the Boolean matrix to a given complete lattice, and answer the question when the index lattice of the Boolean matrix is completely distributive.

The concept and properties of the basic code block are extended to a generalized code region whose control flow graph can be entered at only one node and exited at only one node. Algorithms are given for the determination of all such regions in a program and the associated data flow information. Also, some applications of these regions to global optimization and code motion are discussed.

In the synthesis of switching circuits, a formal representation of the function to be realized by the circuit is first established and simplified as much as possible. Only then is construction of the circuit undertaken. It is argued that an analogous strategy should be followed in the synthesis of digital computer programs: the function to be realized by a program should first be established in a suitable formalism; the resulting formal expression should then be simplified as much as possible; only at this point should translation into the final ``machine'' program be undertaken. In the light of this discussion, the simplification of a certain type of elementary program, containing no branching or internal modification, is considered in detail. It is argued that the analysis of this type of program, whose formalization is called a ``computational chain,'' is a prerequisite to the analysis of more general programs. A system of notation is developed, and rules are given for minimizing the temporary storage requirements associated with a computational chain, for eliminating vacuous and redundant parts, and for forming combinations of chains.

A methodology is described for the automatic design and optimization of program modules. Processes and files are grouped and reorganized in such a way as to produce an optimal design with respect to a specific target machine. Performance criteria for the optimal design are defined in terms of four components: (1) processing time, (2) transport volume, (3) core size, and (4) number and type of I/O units required.

The p-terminal generalization of a two-terminal switching function is a matrix of switching functions representing the conditions under which the terminals are interconnected. The properties of these “switching matrices” are studied, and examples are given to show how they may be employed effectively in the design of switching circuits. Some basic problems are outlined and a bibliography is attached.

An elementary method which yields the partition function of a two-dimensional Ising model is described. The method is purely combinatorial and does not involve any of the algebraic apparatus used in this connection by Onsager and Kaufman.

A new method of checking the consistency of precedence matrices is demonstrated. The method is based on the theorem that a precedence matrix is consistent if and only if every principal submatrix has at least one zero row or zero column. Because this method recognizes inconsistencies in their implicit form, whereas the conventional method recognizes only explicit contradictions, a considerable saving in time and effort can be effected, since the process of making explicit all the implications of a precedence matrix, particularly a large one, is a tedious, time-consuming operation.
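
The theorem suggests a direct consistency check: repeatedly delete an index whose row (or column) is entirely zero within the surviving principal submatrix, and declare the matrix consistent exactly when every index can be eliminated this way. The sketch below is our formulation of that check, not the paper's procedure; an inconsistency corresponds to a cycle in the precedence relation.

```python
# Hedged sketch: consistency check for a precedence matrix M, where
# M[i][j] is nonzero when item i must precede item j. Repeatedly remove
# an index whose row or column is all-zero over the still-alive indices;
# if some principal submatrix has no such index, M is inconsistent.

def is_consistent(M):
    alive = set(range(len(M)))
    while alive:
        removable = next(
            (i for i in alive
             if all(not M[i][j] for j in alive)       # zero row in submatrix
             or all(not M[j][i] for j in alive)),     # zero column in submatrix
            None,
        )
        if removable is None:
            return False   # submatrix with no zero row/column: inconsistent
        alive.remove(removable)
    return True
```

A strict chain 0 < 1 < 2 passes the check, while the mutual precedence 0 < 1 and 1 < 0 is rejected on the first pass.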

Matrix methods may be applied to the analysis of experimental data concerning group structure when these data indicate relationships which can be depicted by line diagrams such as sociograms. One may introduce two concepts, n-chain and clique, which have simple relationships to the powers of certain matrices. Using them it is possible to determine the group structure by methods which are both faster and more certain than less systematic methods. This paper describes such a matrix method and applies it to the analysis of practical examples. At several points some unsolved problems in this field are indicated.
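
The two concepts map onto matrix algebra as the abstract indicates: the (i, j) entry of the n-th power of the choice matrix counts the n-chains from member i to member j, and, in the classic Luce–Perry/Festinger formulation, member i lies in some clique exactly when the i-th diagonal entry of S³ is positive, where S retains only mutual choices (zero diagonal assumed). The sociograms below are invented for illustration.

```python
# Hedged sketch: n-chain counting and clique membership via matrix powers,
# on an integer 0/1 choice matrix A with zero diagonal.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_chains(A, n):
    """Matrix whose (i, j) entry counts the n-chains from i to j: A^n."""
    P = A
    for _ in range(n - 1):
        P = mat_mul(P, A)
    return P

def in_some_clique(A):
    """Classic result: i belongs to a clique (mutual subgroup of size >= 3)
    iff the i-th diagonal entry of S^3 is positive, with S = mutual part of A."""
    n = len(A)
    S = [[A[i][j] * A[j][i] for j in range(n)] for i in range(n)]
    S3 = mat_mul(mat_mul(S, S), S)
    return [S3[i][i] > 0 for i in range(n)]
```

On a three-member mutual triangle every diagonal entry of S³ is positive, so all three are clique members; a mutual pair alone yields none, since no closed 3-chain exists.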