Publications (24) · 0.48 Total Impact


ABSTRACT: This paper shows equivalence of several versions of applicative similarity and contextual approximation, and hence also of applicative bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. The use of bisimilarities simplifies equivalence proofs in calculi and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus, equivalence of our similarities and contextual approximation can be shown by Howe's method. Similarity is transferred back to LR on the basis of an inductively defined similarity. The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy, and its correctness is shown by exploiting infinite trees which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators, and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, which is also an identity on letrec-free expressions.
24th International Conference on Rewriting Techniques and Applications (RTA 2013); 06/2013

ABSTRACT: This note provides an example that demonstrates that in non-deterministic call-by-need lambda calculi extended with cyclic let, extensionality as well as applicative bisimulation in general may not be used as criteria for contextual equivalence w.r.t. may- and two different forms of must-convergence. We also outline how the counterexample can be adapted to other calculi.
Information Processing Letters 07/2011; 111:711-716. DOI:10.1016/j.ipl.2011.04.011 · 0.48 Impact Factor
ABSTRACT: Programmers may use Java diagnostic tools to determine the efficiency of their programs, to find bottlenecks, to study program behavior, or for many other reasons. Some of the common diagnostic tools that we examined are profilers and the Java Virtual Machine (JVM) options that make some internal JVM information available to users. Information produced by these tools varies in degrees of clarity, accuracy, and usefulness. We also found that running some of these tools in conjunction with a program may affect the program's behavior, creating what we refer to as an "observer effect". We examine several tools and discuss their level of usefulness and the extent to which they impact program behavior. Additionally, we discovered program instability, i.e. a tendency of a program to change its behavior when executed with different monitoring tools or multiple times with the same tool. We discuss potential causes for instability based on information obtained via running the HotSpot JVM with an option for logging its internal compilation and optimization process. 
Conference Paper: Simulation in the Call-by-Need Lambda-Calculus with letrec.
ABSTRACT: This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
Proceedings of the 21st International Conference on Rewriting Techniques and Applications, RTA 2010, July 11-13, 2010, Edinburgh, Scotland, UK; 01/2010
Conference Paper: The observer effect of profiling on dynamic Java optimizations.
ABSTRACT: We show that the bytecode injection approach used in common Java profilers, such as HPROF and JProfiler, disables some program optimizations that are performed when the same program is running without a profiler. This behavior is present in both the client and the server mode of the HotSpot JVM.
Companion to the 24th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2009, October 25-29, 2009, Orlando, Florida, USA; 01/2009
ABSTRACT: This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
ABSTRACT: Java differs from many common programming languages in that Java programs are first compiled to platform-independent bytecode. Java bytecode is run by a program called the Java Virtual Machine (JVM). Because of this approach, Java programs are often optimized dynamically (i.e. at runtime) by the JVM. A just-in-time compiler (JIT) is a part of the JVM that performs dynamic optimizations. Our research goal is to be able to detect and study dynamic optimizations performed by a JIT using a profiler. A profiler is a programming tool that can track the performance of another program. A challenge for the use of profilers for Java is a possible interaction between the profiler and the JVM, since the two programs are running at the same time. We show that profiling a Java program may disable some dynamic optimizations as a side effect of recording information about the program's methods. In this paper we examine interactions between a profiler and dynamic optimizations by studying the information collected by the profiler and program runtime measurements with and without profiling. We use the Java HotSpot™ JVM by Sun Microsystems as a JVM with an embedded JIT, and the HPROF profiling tool. The paper details the testing methodology and presents the results and the conclusions.
ABSTRACT: The paper presents a calculus of recursively-scoped records: a two-level calculus with a traditional call-by-name λ-calculus at a lower level and unordered collections of labeled λ-calculus terms at a higher level. Terms in records may reference each other, possibly in a mutually recursive manner, by means of labels. We define two relations: a rewriting relation that models program transformations and an evaluation relation that defines a small-step operational semantics of records. Both relations follow a call-by-name strategy. We use a special symbol called a black hole to model cyclic dependencies that lead to infinite substitution. Computational soundness is a property of a calculus that connects the rewriting relation and the evaluation relation: it states that any sequence of rewriting steps (in either direction) preserves the meaning of a record as defined by the evaluation relation. The computational soundness property implies that any program transformation that can be represented as a sequence of forward and backward rewriting steps preserves the meaning of a record as defined by the small-step operational semantics.
In this paper we describe the computational soundness framework and prove computational soundness of the calculus. The proof is based on a novel inductive context-based argument for meaning preservation of substituting one component into another.
Electronic Notes in Theoretical Computer Science 04/2008; 204(204):147-162. DOI:10.1016/j.entcs.2008.03.059
Conference Paper: A Finite Simulation Method in a Non-deterministic Call-by-Need Lambda-Calculus with Letrec, Constructors, and Case.
ABSTRACT: The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus' semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as non-determinism, makes known approaches to prove that simulation implies contextual preorder, such as Howe's proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
Rewriting Techniques and Applications, 19th International Conference, RTA 2008, Hagenberg, Austria, July 15-17, 2008, Proceedings; 01/2008
ABSTRACT: Java 5.0 added classes with a type parameter, also known as generic types, to better support generic programming. Generic types in Java allow programmers to write code that works for different types, with the type safety checks performed at compilation time. Generic classes in Java function by type erasure. Type erasure works by creating a single instance of the generic class, removing all type-specific information from the generic class, and inserting typecasts to guarantee type-safe calls to instances of the generic class. The selection of the type erasure strategy when implementing the Java generics functionality meant that very few changes were necessary to the Java virtual machine. However, type erasure precludes dynamic optimizations that would have been possible if type information was preserved until runtime. Since most of the optimizations in the Java programming language are performed at runtime, Java programs using generic classes are slower than those that use type-specialized classes. In this paper we propose and discuss an optimization of Java programs that we call specialization of Java generic types. The specialization selectively produces separate copies of generic classes for each type used in the program. This reduces the number of time-consuming typecasts and dynamic method lookups. The optimization produces up to a 15% decrease in a program's run time. We discuss conditions under which the specialization can be performed without changing programs' behavior. We present an algorithm that allows one to get a substantial program speedup with relatively few changes to the program. Using a quicksort sorting procedure as a benchmark, we compare the result of such specialization, which we call minimal, with the original non-optimized program and with the version of the program where all generic classes are specialized.
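The contrast between an erased generic class and a per-type specialized copy can be illustrated with a small sketch (not taken from the paper; the class names `GenericStack` and `IntStack` are hypothetical). After erasure, the generic version stores `Object` references, so `int` elements are boxed and every read involves a compiler-inserted cast; the specialized copy avoids both:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Generic version: after type erasure the JVM sees a List of Object,
// a checkcast is inserted at every pop() call site, and int elements
// must be boxed to Integer.
class GenericStack<T> {
    private final List<T> items = new ArrayList<>();
    void push(T x) { items.add(x); }
    T pop() { return items.remove(items.size() - 1); }
}

// Hand-specialized copy for int: no casts, no boxing. This is the kind
// of per-type copy that selective specialization would produce.
class IntStack {
    private int[] items = new int[16];
    private int size = 0;
    void push(int x) {
        if (size == items.length) items = Arrays.copyOf(items, size * 2);
        items[size++] = x;
    }
    int pop() { return items[--size]; }
}

public class SpecializationDemo {
    public static void main(String[] args) {
        GenericStack<Integer> generic = new GenericStack<>();
        IntStack specialized = new IntStack();
        for (int i = 0; i < 5; i++) { generic.push(i); specialized.push(i); }
        // Both behave identically; the specialized version avoids the
        // erased-type casts and the boxing on each operation.
        System.out.println(generic.pop() + " " + specialized.pop()); // 4 4
    }
}
```

The two classes are observably equivalent on this usage, which is the condition under which such a specialization is behavior-preserving.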
Conference Paper: Optimizing Java programs using generic types.
ABSTRACT: Our research involves improving performance of programs written in the Java programming language. By selective specialization of generic types, we enable the compiler to eliminate typecasting, and provide type information to remove dynamic method lookup at runtime. An example of this specialization using Quicksort showed a performance improvement of about 25%.
Companion to the 22nd Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2007, October 21-25, 2007, Montreal, Quebec, Canada; 01/2007
Article: Specialization of Java Generic Types

Article: A Calculus for Link-time Compilation
ABSTRACT: We present a module calculus for studying a simple model of link-time compilation. The calculus is stratified into a term calculus, a core module calculus, and a linking calculus. At each level, we show that the calculus enjoys a computational soundness property: if two terms are equivalent in the calculus, then they have the same outcome in a small-step operational semantics. This implies that any module transformation justified by the calculus is meaning preserving. This result is interesting because recursive module bindings thwart confluence at two levels of our calculus, and prohibit application of the traditional technique for showing computational soundness, which requires confluence. We introduce a new technique, based on properties we call lift and project, that uses a weaker notion of confluence with respect to evaluation to establish computational soundness for our module calculus. We also introduce the weak distributivity property for a transformation T operating on modules D1 and D2 linked together: T(D1 D2) = T(T(D1) T(D2)). We argue that this property finds promising candidates for link-time optimizations.
ABSTRACT: We present a call-by-value module calculus that serves as a framework for formal reasoning about simple module transformations. The calculus is stratified into three levels: a term calculus, a core module calculus, and a linking calculus. At each level, we define both a calculus reduction relation and a small-step operational semantics and relate them by a computational soundness property: if two terms are equivalent in the calculus, then they have the same observable outcome in the operational semantics. This result is interesting because recursive module bindings thwart confluence at two levels of our calculus and prohibit application of the traditional technique for showing computational soundness, which requires confluence (in addition to other properties, the most important being standardization).
ABSTRACT: This paper focuses on meaning preservation of transformations on a system of mutually dependent program components, which models modular programming. Collections of such components are called records. The term program transformations refers to changes that compilers and similar tools make to a program to improve its performance and/or readability. Such transformations are meaning-preserving if they preserve the behavior of the program. While commonly performed transformations are well tested and widely believed to be meaning-preserving, precise formal proofs of meaning preservation are tedious and rarely done in practice. Optimized programs may have unexpected changes in their behavior due to optimizations. Formal approaches to meaning preservation could lead to more reliable software without sacrificing the program's efficiency. In this paper, we give a formal system for describing program modules and prove some of its important properties. Records represent collections of components which may depend on each other, with possible mutual recursion. Records can be evaluated or reduced (i.e. optimized). Evaluation steps model the way the program would be evaluated. Transformation steps happen only during optimization. In this paper we introduce the necessary formal framework and prove two important properties: confluence of evaluation, and that a non-evaluation step preserves the state of a term. Confluence of evaluation means that the result of evaluation of a record does not depend on a specific order of evaluation. The state of a term shows whether the term can be evaluated, and in the case that it cannot be evaluated further, what value it has. Confluence of evaluation and preserving the state of a term are necessary fundamental properties for proving meaning preservation of the system of records.
ABSTRACT: Traditionally, compilers perform a dual task: they transform a program from the source code (such as C or C++) to machine code, and also optimize the program to make it run faster. Common optimizations include constant propagation and folding, method inlining, dead code elimination, and many others. Java compilers are different from C or C++ compilers: most Java compilers transform Java source code into platform-independent bytecode which is later executed by the Java Virtual Machine (JVM), usually equipped with a Just-In-Time compiler (JIT) to compile bytecode to native machine code on the fly. In this setup, program optimizations can be performed at two levels: by the compiler (while converting Java code into bytecode) and by the JVM when bytecode is compiled to native code as the program is executed. In this project, we investigate optimizations that are performed by the compiler, javac, and by the JVM. We compare our test program efficiency with that of a non-optimized program in order to detect optimizations being performed on the programs, and to determine at which level they are performed. Our testing programs are specifically designed to detect individual program optimizations. Our research is a work in progress. We present the current results and discuss techniques for detecting optimizations and also for determining whether these optimizations are performed at compile time or run time.
ABSTRACT: Given a regular language, we can find a rational generating function that enumerates it. When considering the converse, we discover that there are rational generating functions that look like they may enumerate a regular language but do not. We consider the problem of finding languages that are enumerated by these rational generating functions.
Generating functions are a way to represent infinite sequences. Let A(x) be a function with Taylor series expansion A(x) = sum_{n=0}^infinity a_n x^n. We say that the sequence of Taylor series coefficients (a_n) is encoded by the generating function A(x), and we denote the operation of extracting a_n from A(x) by [x^n]A(x). (Strictly speaking, A(x) is an ordinary generating function; the other types of generating functions are not used in this paper.) We do not care about the convergence of the series, as we treat it as a formal power series only. If a generating function is a rational function, then it is called a rational generating function. A classic rational generating function is

  1/(1 - x - x^2) = 1 + x + 2x^2 + 3x^3 + 5x^4 + 8x^5 + ...   (1)

This function generates the Fibonacci numbers, because the coefficients of the series are 1, 1, 2, 3, 5, 8, .... As this example also shows, the series expansion of a generating function can have nonnegative coefficients even though the generating function itself has negative coefficients.
A linear recurrence relation defines a sequence recursively, where the terms after a fixed number of seed values are a linear combination of a finite number of previous entries [5]. In particular, we consider linear recurrences of the form a_n = c_1 a_{n-1} + c_2 a_{n-2} + ... + c_d a_{n-d} (where each c_i is some constant) with seed values a_0, ..., a_{d-1}. For example, the Fibonacci numbers 1, 1, 2, 3, 5, ... are defined by the linear recurrence relation F_0 = 1, F_1 = 1, F_n = F_{n-1} + F_{n-2}; here the seed values are F_0 and F_1. A property of these linear recurrences is that the sequences they produce are the same as those generated by rational generating functions [5]. Lemma 3.1: the coefficients of the formal power series associated to a rational function f(x) can be realized as a linear recurrence, and conversely, for any sequence generated by a linear recurrence there is a rational function that generates that sequence.
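The correspondence between a rational generating function and a linear recurrence can be checked mechanically. The following sketch (not from the paper; method names are made up for illustration) expands num(x)/den(x) as a formal power series, where the denominator coefficients induce exactly the recurrence den[0]*a_n = num_n - den[1]*a_{n-1} - ... - den[d]*a_{n-d}, and compares the result for 1/(1 - x - x^2) against the Fibonacci recurrence:

```java
import java.util.Arrays;

public class RationalGF {
    // First n coefficients of the formal power series of num(x)/den(x),
    // computed via the recurrence induced by the denominator:
    // den[0]*a[i] = num[i] - sum over k >= 1 of den[k]*a[i-k].
    static long[] expand(long[] num, long[] den, int n) {
        long[] a = new long[n];
        for (int i = 0; i < n; i++) {
            long b = i < num.length ? num[i] : 0;
            for (int k = 1; k <= i && k < den.length; k++)
                b -= den[k] * a[i - k];
            a[i] = b / den[0]; // exact division when den[0] = 1
        }
        return a;
    }

    // The same sequence from the seeds F_0 = F_1 = 1 and the
    // recurrence F_n = F_{n-1} + F_{n-2}.
    static long[] fibonacci(int n) {
        long[] f = new long[n];
        for (int i = 0; i < n; i++)
            f[i] = i < 2 ? 1 : f[i - 1] + f[i - 2];
        return f;
    }

    public static void main(String[] args) {
        // 1/(1 - x - x^2): numerator {1}, denominator {1, -1, -1}.
        long[] series = expand(new long[]{1}, new long[]{1, -1, -1}, 10);
        System.out.println(Arrays.toString(series)); // [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
        System.out.println(Arrays.equals(series, fibonacci(10))); // true
    }
}
```

This illustrates one direction of Lemma 3.1: the series coefficients of a rational function are realized by a linear recurrence whose constants come from the denominator.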
ABSTRACT: Java generic types allow a programmer to create parameterized data structures and methods. For instance, a generic Stack type may be used for integers in one instance and for strings in another. The Java compiler guarantees in this case that integers and strings are not mixed in the same stack. We study the runtime efficiency of a certain inheritance pattern related to Java generic types: narrowing of a type bound. This pattern takes place when a generic type allows a more restricted type of elements than its supertype. We examine a slowdown caused by this pattern for some method calls and study the reasons for it. Knowing the cases when the slowdown takes place, and the reasons for it, would allow software developers to make informed choices when using generic types in their programs.
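A minimal sketch of the narrowing pattern (not from the paper; the class names `Cell` and `NumberCell` are hypothetical): when a subtype narrows the bound of an overridden method's type parameter, the erasures of the two methods differ, so javac emits a synthetic bridge method, and calls through the supertype incur an extra cast per call. One plausible source of the measured slowdown can be observed directly via reflection:

```java
import java.lang.reflect.Method;
import java.util.Arrays;

// Supertype: set(T) erases to set(Object).
class Cell<T> {
    T value;
    void set(T v) { value = v; }
}

// Narrowed bound: here set(T) erases to set(Number), so the compiler
// also emits a synthetic bridge method set(Object) that casts its
// argument to Number and delegates. Calls made through the supertype
// dispatch to that bridge, adding a cast on every call.
class NumberCell<T extends Number> extends Cell<T> {
    @Override
    void set(T v) { value = v; }
}

public class NarrowingDemo {
    public static void main(String[] args) {
        Cell<Integer> c = new NumberCell<>();
        c.set(42); // goes through the bridge method
        System.out.println(c.value); // 42

        // Reflection reveals the compiler-generated bridge:
        long bridges = Arrays.stream(NumberCell.class.getDeclaredMethods())
                .filter(Method::isBridge).count();
        System.out.println("bridge methods: " + bridges); // 1
    }
}
```

Whether the bridge and its cast dominate the cost of a given call is exactly the kind of question the measurements in the paper address.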
Publication Stats
61 Citations
0.48 Total Impact Points
Institutions

2013

Goethe-Universität Frankfurt am Main
 Institut für Informatik
Frankfurt, Hesse, Germany


2007–2011

University of Minnesota Morris
Saint Paul, Minnesota, United States
