Elena Machkasova

Goethe-Universität Frankfurt am Main, Frankfurt, Hesse, Germany

Publications (23) · 0.49 total impact

  • Source
    08/2013;
  • Source
    08/2013;
  • Source
    24th International Conference on Rewriting Techniques and Applications (RTA 2013); 06/2013
  • Source
    Isaac Sjoblom, Tim S Snyder, Elena Machkasova
    ABSTRACT: Programmers may use Java diagnostic tools to determine the efficiency of their programs, to find bottlenecks, to study program behavior, or for many other reasons. Some of the common diagnostic tools that we examined are profilers and the Java Virtual Machine (JVM) options that make some internal JVM information available to users. Information produced by these tools varies in degrees of clarity, accuracy, and usefulness. We also found that running some of these tools in conjunction with a program may affect the program's behavior, creating what we refer to as an "observer effect". We examine several tools and discuss their level of usefulness and the extent to which they impact program behavior. Additionally, we discovered program instability, i.e. a tendency of a program to change its behavior when executed with different monitoring tools or multiple times with the same tool. We discuss potential causes for instability based on information obtained via running the HotSpot JVM with an option for logging its internal compilation and optimization process.
    01/2011;
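As a hedged illustration of the kind of measurement described above (the class name, loop, and bounds are invented for this sketch, not taken from the paper), HotSpot's -XX:+PrintCompilation flag logs JIT compilation events and is one simple way to observe how attaching a monitoring tool changes which methods get compiled:

```java
// Illustrative sketch only; not code from the publication above.
// Run as:  java -XX:+PrintCompilation HotLoop
// HotSpot then prints a line for each method it JIT-compiles; rerunning the
// same program under a profiler and comparing the logs is one way to look
// for the "observer effect" discussed in the abstract.
public class HotLoop {
    // A small "hot" method that HotSpot is likely to compile after many calls.
    static int mix(int x) {
        return (x * 31) ^ (x >>> 3);
    }

    public static void main(String[] args) {
        int acc = 0;
        for (int i = 0; i < 10_000_000; i++) {
            acc += mix(i);
        }
        // Print the result so the loop is not trivially removable.
        System.out.println(acc);
    }
}
```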
  • ABSTRACT: This note provides an example that demonstrates that in non-deterministic call-by-need lambda-calculi extended with cyclic let, extensionality as well as applicative bisimulation in general may not be used as criteria for contextual equivalence w.r.t. may- and two different forms of must-convergence. We also outline how the counterexample can be adapted to other calculi.
    Information Processing Letters 01/2011; 111:711-716. · 0.49 Impact Factor
  • Source
    ABSTRACT: This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
    Proceedings of the 21st International Conference on Rewriting Techniques and Applications, RTA 2010, July 11-13, 2010, Edinburgh, Scotland, UK; 01/2010
  • Source
    Elena Machkasova, Kevin Arhelger, Fernando Trinciante
    ABSTRACT: We show that the bytecode injection approach used in common Java profilers, such as HPROF and JProfiler, disables some program optimizations that are performed when the same program is running without a profiler. This behavior is present in both the client and the server mode of the HotSpot JVM.
    Companion to the 24th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2009, October 25-29, 2009, Orlando, Florida, USA; 01/2009
  • Source
    ABSTRACT: This note shows that in non-deterministic extended lambda-calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
    01/2009;
  • Source
    Kevin Arhelger, Fernando Trinciante, Elena Machkasova
    ABSTRACT: Java differs from many common programming languages in that Java programs are first compiled to platform-independent bytecode. Java bytecode is run by a program called the Java Virtual Machine (JVM). Because of this approach, Java programs are often optimized dynamically (i.e. at run-time) by the JVM. A just-in-time compiler (JIT) is a part of the JVM that performs dynamic optimizations. Our research goal is to be able to detect and study dynamic optimizations performed by a JIT using a profiler. A profiler is a programming tool that can track the performance of another program. A challenge for the use of profilers for Java is a possible interaction between the profiler and the JVM, since the two programs are running at the same time. We show that profiling a Java program may disable some dynamic optimizations as a side effect of recording information about the profiled methods. In this paper we examine interactions between a profiler and dynamic optimizations by studying the information collected by the profiler and program run-time measurements with and without profiling. We use the Java HotSpot JVM by Sun Microsystems as a JVM with an embedded JIT, together with the HPROF profiling tool. The paper details the testing methodology and presents the results and the conclusions.
    01/2009;
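A hedged sketch of the comparison described above (the benchmark class and workload are my own, not the paper's test programs): the same program can be timed without a profiler and then with HPROF attached in its sampling and bytecode-instrumenting CPU modes, and the wall-clock times compared:

```java
// Illustrative sketch only; not code from the publication above.
// Baseline run:                java Bench
// HPROF, sampling mode:        java -agentlib:hprof=cpu=samples Bench
// HPROF, instrumenting mode:   java -agentlib:hprof=cpu=times Bench
// Comparing the reported elapsed times across these runs is one simple way
// to expose the profiler/JIT interactions discussed in the abstract.
public class Bench {
    // A loop-heavy method whose optimization by the JIT is easy to perturb.
    static long work(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = work(50_000_000L);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("result=" + result + " elapsedMs=" + elapsedMs);
    }
}
```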
  • Source
    Elena Machkasova
    ABSTRACT: The paper presents a calculus of recursively-scoped records: a two-level calculus with a traditional call-by-name λ-calculus at a lower level and unordered collections of labeled λ-calculus terms at a higher level. Terms in records may reference each other, possibly in a mutually recursive manner, by means of labels. We define two relations: a rewriting relation that models program transformations and an evaluation relation that defines a small-step operational semantics of records. Both relations follow a call-by-name strategy. We use a special symbol called a black hole to model cyclic dependencies that lead to infinite substitution. Computational soundness is a property of a calculus that connects the rewriting relation and the evaluation relation: it states that any sequence of rewriting steps (in either direction) preserves the meaning of a record as defined by the evaluation relation. The computational soundness property implies that any program transformation that can be represented as a sequence of forward and backward rewriting steps preserves the meaning of a record as defined by the small-step operational semantics. In this paper we describe the computational soundness framework and prove computational soundness of the calculus. The proof is based on a novel inductive context-based argument for meaning preservation of substituting one component into another.
    Electronic Notes in Theoretical Computer Science. 01/2008;
  • Source
    Manfred Schmidt-Schauß, Elena Machkasova
    ABSTRACT: The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus' semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as non-determinism, makes known approaches to prove that simulation implies contextual preorder, such as Howe's proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
    Rewriting Techniques and Applications, 19th International Conference, RTA 2008, Hagenberg, Austria, July 15-17, 2008, Proceedings; 01/2008
  • Source
    Daniel Selifonov, Nathan Dahlberg, Elena Machkasova
    ABSTRACT: Java 5.0 added classes with a type parameter, also known as generic types, to better support generic programming. Generic types in Java allow programmers to write code that works for different types, with the type safety checks performed at compilation time. Generic classes in Java function by type erasure. Type erasure works by creating a single instance of the generic class, removing all type-specific information from the generic class, and inserting typecasts to guarantee type-safe calls to instances of the generic class. The selection of the type erasure strategy when implementing the Java generics functionality meant that very few changes were necessary to the Java virtual machine. However, type erasure precludes dynamic optimizations that would have been possible if type information were preserved until run-time. Since most of the optimizations in the Java programming language are performed at run-time, Java programs using generic classes are slower than those that use type-specialized classes. In this paper we propose and discuss an optimization of Java programs that we call specialization of Java generic types. The specialization selectively produces separate copies of generic classes for each type used in the program. This reduces the number of time-consuming typecasts and dynamic method lookups. The optimization produces up to a 15% decrease in a program's run time. We discuss conditions under which the specialization can be performed without changing a program's behavior. We present an algorithm that allows one to obtain a substantial speedup with relatively few changes to the program. Using a quicksort sorting procedure as a benchmark, we compare the result of such specialization, which we call minimal, with the original non-optimized program and with the version of the program where all generic classes are specialized.
    04/2007;
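A minimal sketch of the erasure-versus-specialization trade-off described above (the classes are invented for illustration; the paper's actual specialization algorithm and benchmarks may differ):

```java
// Illustrative sketch only; not the paper's transformation or benchmark code.
// Under erasure, a single Box class serves every type argument and the compiler
// inserts a checkcast at use sites; a specialized copy avoids that cast.
class Box<T> {
    private T value;
    T get() { return value; }        // erased signature: Object get()
    void set(T v) { value = v; }     // erased signature: void set(Object)
}

// A hand-written "specialized" copy for Integer: no cast is needed when the
// value is retrieved, which is the kind of saving the abstract describes.
class IntegerBox {
    private Integer value;
    Integer get() { return value; }
    void set(Integer v) { value = v; }
}

public class ErasureDemo {
    public static void main(String[] args) {
        Box<Integer> generic = new Box<Integer>();
        generic.set(41);
        int a = generic.get() + 1;   // compiles with a checkcast to Integer before unboxing

        IntegerBox special = new IntegerBox();
        special.set(41);
        int b = special.get() + 1;   // no checkcast for this call

        System.out.println(a + " " + b);
    }
}
```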
  • ABSTRACT: Our research involves improving the performance of programs written in the Java programming language. By selective specialization of generic types, we enable the compiler to eliminate typecasting and provide type information to remove dynamic method lookup at runtime. An example of this specialization using Quicksort showed a performance improvement of about 25%.
    Companion to the 22nd Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2007, October 21-25, 2007, Montreal, Quebec, Canada; 01/2007
  • Source
    Elena Machkasova, Franklyn A. Turbak
    ABSTRACT: We present a module calculus for studying a simple model of link-time compilation. The calculus is stratified into a term calculus, a core module calculus, and a linking calculus. At each level, we show that the calculus enjoys a computational soundness property: if two terms are equivalent in the calculus, then they have the same outcome in a small-step operational semantics. This implies that any module transformation justified by the calculus is meaning preserving. This result is interesting because recursive module bindings thwart confluence at two levels of our calculus, and prohibit application of the traditional technique for showing computational soundness, which requires confluence. We introduce a new technique, based on properties we call lift and project, that uses a weaker notion of confluence with respect to evaluation to establish computational soundness for our module calculus. We also introduce the weak distributivity property for a transformation T operating on modules D1 and D2 combined by a linking operator ⊕: T(D1 ⊕ D2) = T(T(D1) ⊕ T(D2)). We argue that this property identifies promising candidates for link-time optimizations.
    12/2003;
  • Source
    Elena Machkasova, Franklyn A. Turbak
    ABSTRACT: We present a call-by-value module calculus that serves as a framework for formal reasoning about simple module transformations. The calculus is stratified into three levels: a term calculus, a core module calculus, and a linking calculus. At each level, we define both a calculus reduction relation and a small-step operational semantics and relate them by a computational soundness property: if two terms are equivalent in the calculus, then they have the same observable outcome in the operational semantics. This result is interesting because recursive module bindings thwart confluence at two levels of our calculus and prohibit application of the traditional technique for showing computational soundness, which requires confluence (in addition to other properties, the most important being standardization).
    09/2001;
  • Source
    Emily Christiansen, Elena Machkasova
    ABSTRACT: This paper focuses on meaning preservation of transformations on a system of mutually dependent program components, which models modular programming. Collections of such components are called records. The term program transformations refers to changes that compilers and similar tools make to a program to improve its performance and/or readability. Such transformations are meaning-preserving if they preserve the behavior of the program. While commonly performed transformations are well tested and widely believed to be meaning-preserving, precise formal proofs of meaning preservation are tedious and rarely done in practice. Optimized programs may have unexpected changes in their behavior due to optimizations. Formal approaches to meaning preservation could lead to more reliable software without sacrificing the program's efficiency. In this paper, we give a formal system for describing program modules and prove some of its important properties. Records represent collections of components which may depend on each other, with possible mutual recursion. Records can be evaluated or reduced (i.e. optimized). Evaluation steps model the way the program would be evaluated. Transformation steps happen only during optimization. In this paper we introduce the necessary formal framework and prove two important properties: confluence of evaluation and that a non-evaluation step preserves the state of a term. Confluence of evaluation means that the result of evaluation of a record does not depend on a specific order of evaluation. The state of a term shows whether the term can be evaluated, and in the case that it cannot be evaluated further, what value it has. Confluence of evaluation and preserving the state of a term are necessary fundamental properties for proving meaning preservation of the system of records.
  • Source
    Steve Caudill, Elena Machkasova
    ABSTRACT: Traditionally, compilers perform a dual task: they transform a program from the source code (such as C or C++) to machine code, and also optimize the program to make it run faster. Common optimizations include constant propagation and folding, method inlining, dead code elimination, and many others. Java compilers are different from C or C++ compilers: most Java compilers transform Java source code into platform-independent byte code which is later executed by the Java Virtual Machine (JVM), usually equipped with a Just-In-Time compiler (JIT) to compile byte code to native machine code on the fly. In this setup, program optimizations can be performed at two levels: by the compiler (while converting Java code into byte code) and by the JVM when byte code is compiled to native code as the program is executed. In this project, we investigate optimizations that are performed by the compiler, javac, and by the JVM. We compare our test program efficiency with that of a non-optimized program in order to detect optimizations being performed on the programs, and to determine at which level they are performed. Our testing programs are specifically designed to detect individual program optimizations. Our research is a work in progress. We present the current results and discuss techniques for detecting optimizations and also for determining whether these optimizations are performed at compile time or run time.
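A small hedged example of the compile-time versus run-time distinction drawn above (the class and constants are invented, not the project's test programs): javac folds constant expressions built from literals and final constants, while optimizations that involve run-time values are left to the JVM:

```java
// Illustrative sketch only; not one of the project's test programs.
public class FoldingDemo {
    // javac folds this compile-time constant expression: the class file stores
    // 86400 directly, not the multiplications.
    static final int SECONDS_PER_DAY = 24 * 60 * 60;

    // Here the argument is not a compile-time constant, so the multiplications
    // remain in the byte code; any further optimization is up to the JVM's JIT
    // at run time.
    static int toSeconds(int hours) {
        return hours * 60 * 60;
    }

    public static void main(String[] args) {
        System.out.println(SECONDS_PER_DAY);
        System.out.println(toSeconds(Integer.parseInt(args[0])));
    }
}
```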
  • Source
    Fernando Trinciante, Isaac Sjoblom, Elena Machkasova
    ABSTRACT: Java generic types allow a programmer to create parameterized data structures and methods. For instance, a generic Stack type may be used for integers in one instance and for strings in another. The Java compiler guarantees in this case that integers and strings are not mixed in the same stack. We study the runtime efficiency of a certain inheritance pattern related to Java generic types: narrowing of a type bound. This pattern takes place when a generic type allows a more restricted type of elements than its supertype. We examine a slowdown caused by this pattern for some method calls and study the reasons for it. Knowing the cases when the slowdown takes place and the reasons for it would allow software developers to make informed choices when using generic types in their programs.
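A hedged sketch of the inheritance pattern described above (the class names and the bridge-method explanation are my own illustration; the paper may attribute the slowdown differently):

```java
// Illustrative sketch only; not code from the publication above.
class Container<T> {
    T item;
    void put(T x) { item = x; }      // erased signature: put(Object)
}

// The subclass narrows the element type bound from (implicit) Object to Number.
class NumberContainer<T extends Number> extends Container<T> {
    @Override
    void put(T x) {                  // erased signature: put(Number)
        item = x;
    }
    // Because the erased signatures differ, the compiler also emits a synthetic
    // bridge method put(Object) that casts its argument to Number and delegates
    // to put(Number). Calls made through a Container reference go through that
    // extra cast and call, which is one plausible source of the slowdown
    // discussed in the abstract.
}

public class NarrowingDemo {
    public static void main(String[] args) {
        Container<Integer> c = new NumberContainer<Integer>();
        c.put(42);                   // dispatches to the bridge put(Object) first
        System.out.println(c.item);
    }
}
```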
  • Brian Goslinga, Elena Machkasova
    ABSTRACT: Given a regular language, we can find a rational generating function that enumerates it. When considering the converse, we discover that there are rational generating functions that look like they may enumerate a regular language but do not. We consider the problem of finding languages that are enumerated by these rational generating functions. Generating functions are a way to represent infinite sequences: let A(x) be a function with Taylor series expansion $\sum_{n=0}^{\infty} a_n x^n$; we say that the sequence of Taylor series coefficients $(a_n)$ is encoded by the (ordinary) generating function A(x), and we denote the operation of getting $a_n$ from A(x) as $[x^n]A(x)$. We do not care about the convergence of the series, as we treat it as a formal power series only. If a generating function is a rational function, then it is called a rational generating function. A classic rational generating function is $\frac{1}{1 - x - x^2} = 1 + x + 2x^2 + 3x^3 + 5x^4 + 8x^5 + \cdots$, which generates the Fibonacci numbers, because the coefficients of the series are 1, 1, 2, 3, 5, 8, .... As this example also shows, the series expansion has non-negative coefficients even though the generating function itself has negative coefficients. A linear recurrence relation defines a sequence recursively, where the terms after a fixed number of seed values are a linear combination of a finite number of previous entries [5]. In particular, we consider linear recurrences of the form $a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d}$ (where each $c_i$ is a constant) with seed values $a_0, \ldots, a_{d-1}$. For example, the Fibonacci numbers 1, 1, 2, 3, 5, ... are defined by the linear recurrence relation $F_0 = 1$, $F_1 = 1$, $F_n = F_{n-1} + F_{n-2}$, with seed values $F_0$ and $F_1$. A property of these linear recurrences is that the sequences they produce are the same as those generated by rational generating functions [5]: the coefficients of the formal power series associated to a rational function f(x) can be realized as a linear recurrence, and conversely, for any sequence generated by a linear recurrence there is a rational function that generates it.
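A hedged, runnable illustration of the recurrence-to-coefficients direction of the correspondence above (the code is mine, not the paper's): expanding a linear recurrence produces the same coefficient sequence as the corresponding rational generating function, e.g. coefficients {1, 1} with seeds {1, 1} give the Fibonacci coefficients of 1/(1 − x − x²):

```java
// Illustrative sketch only; not code from the publication above.
import java.util.Arrays;

public class RecurrenceDemo {
    // Returns a_0 .. a_(count-1) for a_n = c[0]*a_(n-1) + ... + c[d-1]*a_(n-d),
    // given the seed values a_0 .. a_(d-1).
    static long[] expand(long[] c, long[] seeds, int count) {
        long[] a = Arrays.copyOf(seeds, count);   // remaining entries start at 0
        for (int n = seeds.length; n < count; n++) {
            for (int i = 0; i < c.length; i++) {
                a[n] += c[i] * a[n - 1 - i];
            }
        }
        return a;
    }

    public static void main(String[] args) {
        // Fibonacci: the coefficient sequence of 1/(1 - x - x^2).
        System.out.println(Arrays.toString(expand(new long[]{1, 1}, new long[]{1, 1}, 10)));
        // prints [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    }
}
```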
  • Source
    Elena Machkasova, Emily Christiansen