Conference Paper

It's Alive! Continuous Feedback in UI Programming

Authors:
Sebastian Burckhardt, Manuel Fähndrich, Peli de Halleux, Sean McDirmid, Michal Moskal, Nikolai Tillmann, and Jun Kato

Abstract

Live programming allows programmers to edit the code of a running program and immediately see the effect of the code changes. This tightening of the traditional edit-compile-run cycle reduces the cognitive gap between program code and execution, improving the learning experience of beginning programmers while boosting the productivity of seasoned ones. Unfortunately, live programming is difficult to realize in practice as imperative languages lack well-defined abstraction boundaries that make live programming responsive or its feedback comprehensible. This paper enables live programming for user interface programming by cleanly separating the rendering and non-rendering aspects of a UI program, allowing the display to be refreshed on a code change without restarting the program. A type and effect system formalizes this separation and provides an evaluation model that incorporates the code update step. By putting live programming on a more formal footing, we hope to enable critical and technical discussion of live programming systems.
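To make the separation described in the abstract concrete, here is a minimal TypeScript sketch (not the paper's actual TouchDevelop-based implementation): ordinary, non-rendering state survives a code update, while the render code that derives UI state from it is hot-swapped and re-run without restarting the program.

```typescript
// Minimal sketch of the idea: ordinary state is owned by the running program;
// render code is a pure function from ordinary state to UI state, so it can be
// replaced on a code change and the display simply refreshed.

interface AppState { count: number }          // ordinary (non-rendering) state
interface UiNode { text: string }             // UI state: description of the screen
type RenderFn = (state: AppState) => UiNode[]; // render code

class LiveRuntime {
  private state: AppState = { count: 0 };
  private render: RenderFn;

  constructor(initialRender: RenderFn) {
    this.render = initialRender;
    this.refresh();
  }

  // Ordinary code mutates state and triggers a re-render.
  increment(): void {
    this.state.count += 1;
    this.refresh();
  }

  // Hot swap: install the edited render function without restarting;
  // ordinary state is preserved, only the display is rebuilt.
  updateRenderCode(newRender: RenderFn): void {
    this.render = newRender;
    this.refresh();
  }

  private refresh(): void {
    const ui = this.render(this.state);
    console.log(ui.map(n => n.text).join("\n"));
  }
}

// Old render code...
const runtime = new LiveRuntime(s => [{ text: `count = ${s.count}` }]);
runtime.increment();
// ...edited while the program runs; the counter value is not lost.
runtime.updateRenderCode(s => [{ text: `The count is now ${s.count}!` }]);
```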


... When programs are developed in web-based integrated development environments (WIDEs), the gap between the development and runtime environments can be eliminated. TouchDevelop [1] already achieves this in the web browser – anybody can pause the execution, navigate to the code editor, and modify the appearance of graphical applications. While TouchDevelop still requires explicit text-based programming, this implies the potential of Live Programming techniques to be applied to end-user customization of applications. ...
... Other Live Programming environments can be easily extended to support Live Tuning interaction. In the TouchDevelop environment [1], Live Tuning can be implemented as support for direct manipulation of the GUI elements. Each graphical operation updates parameters in the text-based code defining positions, layouts, font sizes, and other graphical properties of the GUI elements. ...
... The combination of Live Programming and WIDEs (e.g. TouchDevelop [1], TextAlive [20], and f3.js [21]) provides maximum flexibility, allowing the program to be edited at runtime without losing context. The vision of making every tangible software component editable in place has been long-awaited, and we believe that this combination has finally realized that dream. ...
Conference Paper
Full-text available
Live Programming allows programmers to gain information about the program continuously during its development. While it has been implemented in various integrated development environments (IDEs) for programmers, its interaction techniques such as slider widgets for continuous parameter tuning are comprehensible for people without any prior knowledge of programming and have been widely used for a long time. In this paper, we aim to introduce prior work on Live Programming research from the interaction point of view and relate it to Human-Computer Interaction research. We then name the subset of Live Programming interaction that only involves changes in constant values "Live Tuning." Our example IDEs that implement both Live Programming and Live Tuning interactions are showcased, followed by a discussion of possible future directions of programming experience (PX) research.
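As a hedged illustration of the "Live Tuning" idea above, the following TypeScript sketch exposes only constant values for live manipulation; a slider widget would call set() and the running program (and, in a full system, the source text) picks up the new constant immediately. The names (TunableRegistry, declare, onChange) are illustrative and not from the cited paper.

```typescript
// Only constants are tunable; program logic is untouched by the live interaction.
type Listener = (value: number) => void;

class TunableRegistry {
  private values = new Map<string, number>();
  private listeners = new Map<string, Listener[]>();

  // Declare a tunable constant in program code.
  declare(name: string, initial: number): number {
    if (!this.values.has(name)) this.values.set(name, initial);
    return this.values.get(name)!;
  }

  // Called by a slider widget; the running program sees the new value
  // immediately, and the source literal could be rewritten to match.
  set(name: string, value: number): void {
    this.values.set(name, value);
    (this.listeners.get(name) ?? []).forEach(l => l(value));
  }

  onChange(name: string, listener: Listener): void {
    const ls = this.listeners.get(name) ?? [];
    ls.push(listener);
    this.listeners.set(name, ls);
  }
}

// Usage: the GUI code reads a tunable font size; dragging a slider
// calls set() and the label is redrawn with the new constant.
const tunables = new TunableRegistry();
let fontSize = tunables.declare("label.fontSize", 14);
tunables.onChange("label.fontSize", v => { fontSize = v; console.log(`redraw at ${v}px`); });
tunables.set("label.fontSize", 18); // e.g. driven by a slider widget
```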
... Live programming [15,55] aims to free developers from the "edit-compile-run" cycle and allows them to change programs at runtime and get immediate feedback on the change. Often, a form of live programming is supported by several existing programming languages and integrated development environments (IDEs) (e.g., [34,56,57,62,71]), and its benefits and utility are discussed in several studies (e.g., [16,51,54]). ...
... Different programming languages provide different levels of support for live programming. For instance, many programming languages only support fix-and-continue [15] which allows only a limited set of code changes excluding, for example, state transfer [41]. A change to a model element may require runtime updates that are not supported by the target language, and the lack of support for state transfer typically requires restarting the execution of the model for the effects of the change to become visible. ...
... Examples of graphical formalisms are VIVA [78] and Flogo [38]; examples of textual formalisms are ElmScript [22] and Smalltalk [34]. Some usability work has focused on optimizing edit latency, such as incremental compilation [54] and safe and efficient hot swapping (e.g., [15,17,29]). ...
Article
Full-text available
Live modeling has been recognized as an important technique to edit behavioral models while they are being executed and helps in better understanding the impact of a design choice. In the context of Model-driven Development (MDD), models can be executed by interpretation or by the translation of models into existing programming languages, often by code generation. This work is concerned with the support of live modeling in the context of state machine models when they are executed by code generation. To this end, we propose an approach that is completely independent of any live programming support offered by the target language. This independence is achieved with the help of a model transformation which equips the model with support for features which are required for live modeling. A subsequent code generation then produces a self-reflective program that allows changes to the model elements at runtime (through synchronization of design and runtime models). We have applied the approach in the context of UML-RT and created a prototype (Live-UMLRT) that provides a full set of services for live modeling of UML-RT state machines such as re-execution, adding/removing states and transitions, and adding/removing action code. We have evaluated the prototype on several use-cases. The evaluation shows that (1) generation of a self-reflective program and model instrumentation can be carried out with reasonable performance, and (2) our approach can apply model changes to the running execution faster than the standard approach that depends on the live programming support of the target language.
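The following is an illustrative TypeScript sketch (not Live-UMLRT's generated code) of what a "self-reflective" state machine program can look like: states and transitions are ordinary runtime data, so the model can be edited while the machine runs and the current state is preserved across edits.

```typescript
interface Transition { event: string; target: string; action?: () => void }

class LiveStateMachine {
  private transitions = new Map<string, Transition[]>();
  constructor(public current: string) {}

  addState(name: string): void {
    if (!this.transitions.has(name)) this.transitions.set(name, []);
  }

  addTransition(from: string, t: Transition): void {
    this.addState(from);
    this.addState(t.target);
    this.transitions.get(from)!.push(t);
  }

  removeState(name: string): void {
    this.transitions.delete(name);
    for (const [, ts] of this.transitions) {
      const i = ts.findIndex(t => t.target === name);
      if (i >= 0) ts.splice(i, 1);
    }
  }

  // Dispatch an event against the *current* model, whatever it is now.
  handle(event: string): void {
    const t = (this.transitions.get(this.current) ?? []).find(x => x.event === event);
    if (t) { t.action?.(); this.current = t.target; }
  }
}

const m = new LiveStateMachine("Idle");
m.addTransition("Idle", { event: "start", target: "Running" });
m.handle("start");
// Live model edit while executing: add a new state and transition.
m.addTransition("Running", { event: "pause", target: "Paused" });
m.handle("pause");
console.log(m.current); // "Paused"
```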
... Live Programming: (18) Real-time Programming and the Big Ideas of Computational Literacy [13], (17) VIVA: A Visual Language for Image Processing [36], (15) Smalltalk-80: The Language and Its Implementation [11], (13) It's Alive! Continuous Feedback in UI Programming [3], (13) Living It Up with a Live Programming Language [24]. Exploratory Programming: (10) Smalltalk-80: The Language and Its Implementation [11], (7) Smalltalk and Exploratory Programming [30], (5) Using Prototypical Objects to Implement Shared Behavior in Object-Oriented Systems [21], (4) An Efficient Implementation of SELF, a Dynamically-Typed Object-Oriented Language Based on Prototypes [6], (4) The Design and Implementation of the Self Compiler, an Optimizing Compiler for Object-Oriented Programming Languages [5], (4) Back to the Future: The Story of Squeak, a Practical Smalltalk Written in Itself [17], (4) Self: The Power of Simplicity [40]. Live Coding: (33) Live Coding in Laptop Performance [8], (15) The Programming Language as a Musical Instrument [1], (14) Live Coding of Consequence [7], (14) Gibber: Live Coding Audio in the Browser [28], (13) Aa-Cell in Practice: An Approach to Musical Live Coding [32], (13) Live Algorithm Programming and a Temporary Organisation for its Promotion [41]. We retrieved less than a third of the references through Semantic Scholar. We extracted the remaining references in a semi-automated process directly from the publications. ...
... Beyond the mere numbers, the mentions of visual languages are interesting as they all appear in the context of one of two general arguments. The first one is that visual languages have already supported liveness for a while but the transfer to textual languages was difficult (see for example [3]). The second argument is that visual languages do support liveness but only have a limited programming model (see for example [8]). ...
... Further, the keywords and the most prominent publications suggest that a lot of work in the exploratory corpus is about Smalltalk and SELF systems. There are fewer hints of a focus on a few systems in the live programming and live coding corpora. ...
Article
Full-text available
Various programming tools, languages, and environments give programmers the impression of changing a program while it is running. This experience of liveness has been discussed for over two decades and a broad spectrum of research on this topic exists. Amongst others, this work has been carried out in the communities around three major ideas which incorporate liveness as an important aspect: live programming, exploratory programming, and live coding. While there have been publications on the focus of each particular community, the overall spectrum of liveness across these three communities has not been investigated yet. Thus, we want to delineate the variety of research on liveness. At the same time, we want to investigate overlaps and differences in the values and contributions between the three communities. Therefore, we conducted a literature study with a sample of 212 publications on the terms retrieved from three major indexing services. On this sample, we conducted a thematic analysis regarding the following aspects: motivation for liveness, application domains, intended outcomes of running a system, and types of contributions. We also gathered bibliographic information such as related keywords and prominent publications. Besides other characteristics the results show that the field of exploratory programming is mostly about technical designs and empirical studies on tools for general-purpose programming. In contrast, publications on live coding have the most variety in their motivations and methodologies with a majority being empirical studies with users. As expected, most publications on live coding are applied to performance art. Finally, research on live programming is mostly motivated by making programming more accessible and easier to understand, evaluating their tool designs through empirical studies with users. In delineating the spectrum of work on liveness, we hope to make the individual communities more aware of the work of the others. Further, by giving an overview of the values and methods of the individual communities, we hope to provide researchers new to the field of liveness with an initial overview.
... Users can set a breakpoint on a node (3). A sliding bar provides access to the previous states in the history of the graph (4), and an input field allows one to specify a query on a previous state or a conditional breakpoint (5). For illustration, we also show the case of multiple active dependency graphs (6), where colors indicate the performance of each node. ...
... Thanks to the purity of Elm, users can play the animation backwards in time. Live programming [31,5] is about keeping the GUI in sync with code changes. In this line of work [32], McDirmid and Edwards explored the use of managed time for live programming: application time is controlled by the runtime environment, an approach inspired by the automatic memory management of garbage-collected VMs. ...
Conference Paper
Full-text available
Reactive programming is a recent programming technique that provides dedicated language abstractions for reactive software. Reactive programming relieves developers from manually updating outputs when the inputs of a computation change, it overcomes a number of well-known issues of the Observer design pattern, and it makes programs more comprehensible. Unfortunately, complementing the new paradigm with proper tools is a vastly unexplored area. Hence, as of now, developers can embrace reactive programming only at the cost of a more challenging development process. In this paper, we investigate a primary issue in the field: debugging programs in the reactive style. We analyze the problem of debugging reactive programs, show that the reactive style requires a paradigm shift in the concepts needed for debugging, and propose RP Debugging, a methodology for effectively debugging reactive programs. These ideas are implemented in Reactive Inspector, a debugger for reactive programs integrated with the Eclipse Scala IDE. Evaluation based on a controlled experiment shows that RP Debugging outperforms traditional debugging techniques.
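A minimal TypeScript sketch of why reactive programs call for a different debugging model: at run time the interesting structure is the dependency graph between signals, not a call stack. The names (Signal, dependents) are illustrative and do not reflect the Reactive Inspector or REScala APIs.

```typescript
class Signal<T> {
  private dependents: Signal<any>[] = [];
  value: T;

  constructor(public name: string, private compute: () => T, deps: Signal<any>[] = []) {
    deps.forEach(d => d.dependents.push(this)); // record graph edges
    this.value = compute();
  }

  // Re-evaluate this node and propagate along the dependency graph;
  // a reactive debugger visualizes exactly this traversal instead of a stack.
  update(): void {
    this.value = this.compute();
    this.dependents.forEach(d => d.update());
  }
}

let celsius = 20;
const c = new Signal("celsius", () => celsius);
const f = new Signal("fahrenheit", () => c.value * 9 / 5 + 32, [c]);
const label = new Signal("label", () => `${f.value} °F`, [f]);

celsius = 25;
c.update();               // propagation: celsius -> fahrenheit -> label
console.log(label.value); // "77 °F"
```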
... Testable null hypotheses would include the assumption of observing an equal number of errors caused by an implementation task, equal degrees of task completion, and equal time to completion compared to a fully immediate workflow. Additionally, we are interested in whether programmers would spend a higher proportion of their time coding alongside a running program, which would indicate a higher continuity of feedback and thereby increase liveness according to multiple perspectives on what liveness means [4,26,27]. Negative results on these hypotheses might suggest that unmediated emergence of changes, despite being at continuous risk of breaking the program, is still the best feedback so far. ...
... Version Control Systems (VCS) like Git, Apache Subversion, or Mercurial are based on similar ideas, but their usage scenarios are different. One analogy that can be drawn is the correspondence of active Edit Transactions to revisions or commits in a VCS, deactivation to reverting a commit, and a staged but not yet active Edit Transaction to the working copy. ...
Article
Full-text available
Live programming environments enable programmers to edit a running program and obtain immediate feedback on each individual change. The liveness quality is valued by programmers to help work in small steps and continuously add or correct small functionality while maintaining the impression of a direct connection between each edit and its manifestation at run-time. Such immediacy may conflict with the desire to perform a combined set of intermediate steps, such as a refactoring, without immediately taking effect after each individual edit. This becomes important when an incomplete sequence of small-scale changes can easily break the running program. State-of-the-art solutions focus on retroactive recovery mechanisms, such as debugging or version control. In contrast, we propose a proactive approach: multiple individual changes to the program are collected in an Edit Transaction, which can be made effective if deemed complete. Upon activation, the combined steps become visible together. Edit Transactions are capable of dynamic scoping, allowing a set of changes to be tested in isolation before being extended to the running application. This enables a live programming workflow with full control over change granularity, immediate feedback on tests, delayed effect on the running application, and coarse-grained undos. We present an implementation of Edit Transactions along with Edit-Transaction-aware tools in Squeak/Smalltalk. We assess this implementation by conducting a case study with and without the new tool support, comparing programming activities, errors, and detours for implementing new functionality in a running simulation. We conclude that workflows using Edit Transactions have the potential to increase confidence in a change, reduce potential for run-time errors, and eventually make live programming more predictable and engaging.
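A sketch of the Edit Transaction idea described above; the actual work is implemented in Squeak/Smalltalk, so this TypeScript rendering is only illustrative. Individual edits are staged and become visible to the running program only when the transaction is activated as a whole.

```typescript
type Method = (...args: any[]) => any;

class EditTransaction {
  private staged = new Map<string, Method>();
  constructor(private target: Record<string, Method>) {}

  // Stage an edit without affecting the running program yet.
  redefine(name: string, impl: Method): void {
    this.staged.set(name, impl);
  }

  // Activate: all staged edits take effect together, so the program never
  // runs with only half of a multi-method refactoring applied.
  activate(): void {
    for (const [name, impl] of this.staged) this.target[name] = impl;
    this.staged.clear();
  }
}

// Running object with two methods that must stay consistent with each other.
const account: Record<string, Method> = {
  deposit(amount: number) { return `deposit ${amount}`; },
  withdraw(amount: number) { return `withdraw ${amount}`; },
};

const tx = new EditTransaction(account);
tx.redefine("deposit", (amount: number, currency: string) => `deposit ${amount} ${currency}`);
tx.redefine("withdraw", (amount: number, currency: string) => `withdraw ${amount} ${currency}`);
// ...the running program still uses the old signatures here...
tx.activate(); // both signature changes become visible at once
```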
... Additionally, the state of the running application is transparently transferred from the running program to the newly compiled version of the code, thereby removing the need to redo all operations up to the point in time where the change was made. While there are already several tools that support live programming, making a programming language "live" is carried out ad hoc and is referred to as a black art [5]. As such, it is difficult to transpose liveness techniques between languages. ...
... An example issue is the question of how state needs to be retained [12,40], and what needs to be recomputed [6]. Making an existing programming language live is often carried out through ad hoc modifications, turning liveness into a black art [5]. With our approach, we provide an overview of the steps required to make a formalism live. ...
Article
Full-text available
To develop complex systems and tackle their inherent complexity, (executable) modelling takes a prominent role in the development cycle. But whereas good tool support exists for programming, tools for executable modelling have not yet reached the same level of functionality and maturity. In particular, live programming is seeing increasing support in programming tools, allowing users to dynamically change the source code of a running application. This significantly reduces the edit–compile–debug cycle and grants the ability to gauge the effect of code changes instantly, aiding in debugging and code comprehension in general. In the modelling domain, however, live modelling only has limited support for a few formalisms. In this paper, we propose a Multi-Paradigm Modelling approach to add liveness to modelling languages in a generic way, which is reusable across multiple formalisms. Live programming concepts and techniques are transposed to (domain-specific) executable modelling languages, clearly distinguishing between generic and language-specific concepts. To evaluate our approach, live modelling is implemented for three modelling languages, for which the implementation of liveness substantially differs. For all three cases, the exact same structured process was used to enable live modelling, which only required a “sanitization” operation to be defined.
... Recent works include languages such as McDirmid's SuperGlue [19], Jonathan Edwards' Subtext [20], and Glitch [21]. Microsoft's TouchDevelop has been modified to support Live Programming [22]. DeLine et al. proposed Tempe, a Live Programming environment for data analysis [23]. ...
... We saw that the most advanced approaches were sometimes not necessary, if simpler approaches such as inspecting could do the job. Technologically impressive tools such as the Whyline [59] have been developed, while the Debugger Canvas [66] or TouchDevelop [22] have been put in production. These tools are extremely useful and required a lot of effort to build. ...
Conference Paper
Full-text available
Live Programming environments allow programmers to get feedback instantly while changing software. Liveness is gaining attention among industrial and open-source communities; several IDEs offer high degrees of liveness. While several studies looked at how programmers work during software evolution tasks, none of them consider live environments. We conduct such a study based on an analysis of 17 programming sessions of practitioners using Pharo, a mature Live Programming environment. The study is complemented by a survey and subsequent analysis of 16 programming sessions in additional languages, e.g., JavaScript. We document the approaches taken by developers during their work. We find that some liveness features are extensively used, and have an impact on the way developers navigate source code and objects in their work.
... Lighttable [19] and Chrome DevTools provide limited instant previews akin to those presented in this paper, but without well specified recomputation model. Finally, work on keeping state during code edits [8,35] would be relevant for supporting streaming data. ...
... We capture that by adding an edge callsite(m, i) from x to o which indicates that x is the input variable of a function passed as the i-th argument to the m member of the expression represented by the target node. We also add callsite(m, i) as an edge from the node of the function. Figure 14 shows the revised dependency graph for o.m(λx → x). ...
Preprint
Full-text available
Context: A growing amount of code is written to explore and analyze data, often by data analysts who do not have a traditional background in programming, for example by journalists. Inquiry: The way such data analysts write code is different from the way software engineers do so. They use few abstractions, work interactively and rely heavily on external libraries. We aim to capture this way of working and build a programming environment that makes data exploration easier by providing instant live feedback. Approach: We combine theoretical and applied approaches. We present the data exploration calculus, a formal language that captures the structure of code written by data analysts. We then implement a data exploration environment that evaluates code instantly during editing and shows previews of the results. Knowledge: We formally describe an algorithm for providing instant previews for the data exploration calculus that allows the user to modify code in an unrestricted way in a text editor. Supporting interactive editing is tricky as any edit can change the structure of code and fully recomputing the output would be too expensive. Grounding: We prove that our algorithm is correct and that it reuses previous results when updating previews after a number of common code edit operations. We also illustrate the practicality of our approach with an empirical evaluation and a case study. Importance: As data analysis becomes an ever more important use of programming, research on programming languages and tools needs to consider new kinds of programming workflows appropriate for those domains and conceive new kinds of tools that can support them. The present paper is one step in this important direction.
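A hedged sketch of one way to obtain instant previews that reuse previous results, in the spirit of the abstract above; this is not the paper's data exploration calculus or its algorithm. Each pipeline step is cached under a key describing the program prefix, so editing a later step only recomputes from that point on.

```typescript
type Step = { name: string; apply: (rows: number[]) => number[] };

class PreviewEngine {
  private cache = new Map<string, number[]>();

  preview(input: number[], steps: Step[]): number[] {
    let data = input;
    let key = "input";
    for (const step of steps) {
      // A real system would key on the step's code, not just its name.
      key = `${key}|${step.name}`;
      const hit = this.cache.get(key);
      if (hit) {
        data = hit;                // reuse the result computed before the edit
      } else {
        data = step.apply(data);   // recompute only the edited suffix
        this.cache.set(key, data);
      }
    }
    return data;
  }
}

const engine = new PreviewEngine();
const rows = [5, 3, 8, 1];
engine.preview(rows, [{ name: "sort", apply: r => [...r].sort((a, b) => a - b) }]);
// The analyst appends a filter in the editor; "sort" is served from the cache.
const out = engine.preview(rows, [
  { name: "sort", apply: r => [...r].sort((a, b) => a - b) },
  { name: "gt2", apply: r => r.filter(x => x > 2) },
]);
console.log(out); // [3, 5, 8]
```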
... Changing code in a live system will often produce new behavior and new objects, but it also results in some objects, methods, or references becoming obsolete. A good example of such stale code [2] is dangling event listeners. A framework has to take care of not letting the old code and behavior get in the way of the new one. ...
... Giving live feedback while imperatively constructing UIs is easier to achieve and often studied. In "it's alive!", the approach is to separate "UI state from ordinary state, and the render code that builds UI state from ordinary code" [2]. In our approach, we also take advantage of having different kinds of state: we treat HTML elements and attributes differently from JavaScript object state. ...
Conference Paper
Explorative and live development environments flourish when they can impose restrictions. By forcing a specific programming language or framework, the environment can better enhance the experience of editing code with immediate feedback or direct manipulation. Lively Kernel's user interface (UI) framework Morphic provides such a development experience when working with graphical objects in a direct way, giving immediate feedback during development. Our new development environment Lively4 achieves a similar development experience, but targets general HTML elements. Web Components as a new Web standard provide a very powerful abstraction mechanism. Plain HTML elements provide direct building blocks for tools and applications. Unfortunately, Web Components lack proper capabilities to support run-time development. To address this issue, we use object migration to provide immediate feedback when editing UI code. The approach is evaluated by discussing known problems, resulting best practices and future work.
... We have developed a simple reference implementation, HZ, of Hazelnut extended with sum types as described in Sec. 4. In order to reach a wide audience, we decided to implement HZ in the web browser. To take advantage of a mature implementation of the FRP model, we chose to implement HZ using OCaml, the OCaml React library, and the js_of_ocaml compiler and associated libraries [4]. ...
... This notion of reduction commuting with instantiation has also been studied in other calculi [52]. Being able to edit a running program also has connections to less formal work on "live programming" interfaces [7,33]. ...
Conference Paper
Structure editors allow programmers to edit the tree structure of a program directly. This can have cognitive benefits, particularly for novice and end-user programmers. It also simplifies matters for tool designers, because they do not need to contend with malformed program text. This paper introduces Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and a cursor. Hazelnut goes one step beyond syntactic well-formedness: its edit actions operate over statically meaningful incomplete terms. Naïvely, this would force the programmer to construct terms in a rigid “outside-in” manner. To avoid this problem, the action semantics automatically places terms assigned a type that is inconsistent with the expected type inside a hole. This meaningfully defers the type consistency check until the term inside the hole is finished. Hazelnut is not intended as an end-user tool itself. Instead, it serves as a foundational account of typed structure editing. To that end, we describe how Hazelnut’s rich metatheory, which we have mechanized using the Agda proof assistant, serves as a guide when we extend the calculus to include binary sum types. We also discuss various interpretations of holes, and in so doing reveal connections with gradual typing and contextual modal type theory, the Curry-Howard interpretation of contextual modal logic. Finally, we discuss how Hazelnut’s semantics lends itself to implementation as an event-based functional reactive program. Our simple reference implementation is written using js_of_ocaml.
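A tiny illustrative sketch (not Hazelnut's actual action semantics) of the key move described above: a term whose type is inconsistent with the expected type is placed inside a hole, so the whole program stays statically meaningful while the edit is in progress.

```typescript
type Typ = "num" | "bool" | "hole";

type Exp =
  | { kind: "lit"; value: number }     // type num
  | { kind: "isZero"; arg: Exp }       // type bool, expects a num argument
  | { kind: "hole"; contents?: Exp };  // type hole (unknown)

function typeOf(e: Exp): Typ {
  switch (e.kind) {
    case "lit": return "num";
    case "isZero": return "bool";
    case "hole": return "hole";
  }
}

// Two types are consistent if they agree or either is the unknown hole type.
function consistent(a: Typ, b: Typ): boolean {
  return a === b || a === "hole" || b === "hole";
}

// When constructing a term in a position that expects `expected`, wrap the
// term in a hole if its type is inconsistent, instead of rejecting the edit.
function place(e: Exp, expected: Typ): Exp {
  return consistent(typeOf(e), expected) ? e : { kind: "hole", contents: e };
}

// isZero expects a num argument; a bool-typed term gets wrapped in a hole,
// deferring the consistency check until the programmer finishes the edit.
const boolTerm: Exp = { kind: "isZero", arg: { kind: "lit", value: 0 } };
const program: Exp = { kind: "isZero", arg: place(boolTerm, "num") };
console.log(JSON.stringify(program, null, 2));
```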
... Another recently proposed yet heavily influential theme for notebooks is called Live Programming [2]. This concept enables more fluid problem solving compared to "edit-compile-debug" style programming. ...
... One of the prominent examples for Live Programming is spreadsheets. Data and formulae can be edited and the effects of those edits can be observed immediately with spreadsheets [2]. ...
Conference Paper
Recently, data analytics notebooks have become an attractive tool for data science experiments. While data analytics notebooks have been frequently used for batch analytics applications, there are multiple unique problems which need to be addressed when they are used for online analytics scenarios. Issues such as mapping the event processing model into notebooks, summarizing data streams to enable visualizations, and scalability of distributed event processing pipelines in notebook servers remain some of the key issues to be solved. As a solution, in this demonstration we present an implementation of the event processing paradigm in a notebook environment. Specifically, we implement WSO2 Data Analytics Server (DAS)'s event processor in the Apache Zeppelin notebook environment. We first demonstrate how an event processing network can be implemented in a stream processing notebook itself. Second, we demonstrate how such a network can be extended to a distributed stream processing scenario using WSO2 DAS and Apache Storm. We also discuss various improvements needed in the user interface to develop stream processing networks in such a notebook environment.
... From a conceptual point of view, a live interaction helps to better understand and manage complex problems. LP is also suitable for exploratory stages since it speeds up development by reducing offline compilation steps [10,20]. In particular, Burckhardt et al. ...
... In particular, Burckhardt et al. [10] see the classical edit-compile-run cycle as one reason for the gap between the program text and the perception of its effects. ...
Conference Paper
Full-text available
Modern development environments promote live programming (LP) mechanisms because LP enhances the development experience by providing instantaneous feedback and interaction with live objects. LP is typically supported with advanced reflective techniques within dynamic languages. These languages run on top of Virtual Machines (VMs) that are built in a static manner so that most of their components are bound at compile time. As a consequence, VM developers are forced to work using the traditional edit-compile-run cycle, even when they are designing LP-supporting environments. In this paper we explore the idea of bringing LP techniques to the VM domain for improving their observability, evolution and adaptability at run-time. We define the notion of fully reflective execution environments (EEs), systems that provide reflection not only at the application level but also at the level of the VM. We characterize such systems, propose a design, and present Mate v1, a prototypical implementation. Based on our prototype, we analyze the feasibility and applicability of incorporating reflective capabilities into different parts of EEs. Furthermore, the evaluation demonstrates the opportunities such reflective capabilities provide for unanticipated dynamic adaptation scenarios, thus benefiting a wider range of users.
... To support developers' work within edit-run cycles, researchers have designed a variety of tools. Live programming environments [5] aim to improve developers' productivity by merging the edit and run steps into a single step [6], allowing developers to edit and run the program concurrently [7]-[12]. Live programming environments support developers within the edit step by generating code snippets using examples provided by developers [13], [14] or by supporting direct manipulation of the output [15], [16]. ...
... Live programming tools offer direct support for edit-run cycles [7]-[12]. With live programming tools, developers edit their program and the tool automatically compiles, runs, and presents the output [8], [19]. ...
Preprint
Full-text available
As developers program and debug, they continuously edit and run their code, a behavior known as edit-run cycles. While techniques such as live programming are intended to support this behavior, little is known about the characteristics of edit-run cycles themselves. To bridge this gap, we analyzed 28 hours of programming and debugging work from 11 professional developers which encompassed over three thousand development activities. We mapped activities to edit or run steps, constructing 581 debugging and 207 programming edit-run cycles. We found that edit-run cycles are frequent. Developers edit and run the program, on average, 7 times before fixing a defect and twice before introducing a defect. Developers waited longer before again running the program when programming than debugging, with a mean cycle length of 3 minutes for programming and 1 minute for debugging. Most cycles involved an edit to a single file after which a developer ran the program to observe the impact on the final output. Edit-run cycles which included activities beyond edit and run, such as navigating between files, consulting resources, or interacting with other IDE features, were much longer, with a mean length of 5 minutes, rather than 1.5 minutes. We conclude with a discussion of design recommendations for tools to enable more fluidity in edit-run cycles.
... Developers are editing the running program, and immediately see the impact of any code change (e.g., on the output that the program produces). Hence, developers can make use of immediate feedback to steer the next editing steps [Burckhardt et al., 2013]. ...
... Techniques for live programming are designed to help developers to understand the behavior of their code at development time [34]. In particular, some of these techniques aim to minimize feedback loops by, for example, continuously displaying runtime state [15]. ...
Article
Full-text available
Context: Software development tools should work and behave consistently across different programming languages, so that developers do not have to familiarize themselves with new tooling for new languages. Also, being able to combine multiple programming languages in a program increases reusability, as developers do not have to recreate software frameworks and libraries in the language they develop in and can reuse existing software instead. Inquiry: However, developers often have a broad choice of tools, some of which are designed for only one specific programming language. Various Integrated Development Environments have support for multiple languages, but are usually unable to provide a consistent programming experience due to different language-specific runtime features. With regard to language integrations, common mechanisms usually use abstraction layers, such as the operating system or a network connection, which are often boundaries for tools and hence negatively affect the programming experience. Approach: In this paper, we present a novel approach for tool reuse that aims to improve the experience with regard to working with multiple high-level dynamic, object-oriented programming languages. As part of this, we build a multi-language virtual execution environment and reuse Smalltalk's live programming tools for other languages. Knowledge: An important part of our approach is to retrofit and align runtime capabilities for different languages as it is a requirement for providing consistent tools. Furthermore, it provides convenient means to reuse and even mix software libraries and frameworks written in different languages without breaking tool support. Grounding: The prototype system Squimera is an implementation of our approach and demonstrates that it is possible to reuse both development tools from a live programming system to improve the development experience as well as software artifacts from different languages to increase productivity. Importance: In the domain of polyglot programming systems, most research has focused on the integration of different languages and corresponding performance optimizations. Our work, on the other hand, focuses on tooling and the overall programming experience.
... Based on these liveness levels and on the work of others [8,28,35], we expect programming environments and languages to improve the programming experience by providing immediate feedback on the validity of programs and code execution. This feedback is meant to reduce the friction that is inherent in the communication between humans and machines, which is constrained by artificial languages and machines that still neither perceive the context of a conversation nor learn from interaction. ...
Conference Paper
Full-text available
While an integral part of all programming languages, the design of collection libraries is rarely studied. This work briefly reviews the collection libraries of 14 languages to identify possible design dimensions. Some languages have surprisingly few but versatile collections, while others have large libraries with many specialized collections. Based on the identified design dimensions, we argue that a small collection library with only a sequence, a map, and a set type are a suitable choice to facilitate exploratory programming. Such a design minimizes the number of decisions programmers have to make when dealing with collections, and it improves discoverability of collection operations. We further discuss techniques that make their implementation practical from a performance perspective. Based on these arguments, we conclude that languages which aim to support exploratory programming should strive for small and versatile collection libraries.
... Conditions can be tested to see if they are true or false. Live Programming [96][97][98], also found in most blocks programming languages, enables users to experience the outcome of a program by changing a running program in real time, live. ...
Article
Full-text available
The blocks programming community has been preoccupied with identifying syntactic obstacles that keep novices from learning to program. Unfortunately, this focus is now holding back research from systematically investigating various technological affordances that can make programming more accessible. Employing approaches from program analysis, program visualization, and real-time interfaces can push blocks programming beyond syntax towards the support of semantics and even pragmatics. Syntactic support could be compared to checking spelling and grammar in word processing. Spell checking is relatively simple to implement and immediately useful, but provides essentially no support to create meaningful text. Over the last 25 years, I have worked to empower students to create their own games, simulations, and robots. In this time I have explored, combined, and evaluated a number of programming paradigms. Every paradigm including data flow, programming by example, and programming through analogies brings its own set of affordances and obstacles. Twenty years ago, AgentSheets combined four key affordances of blocks programming, and since then has evolved into a highly accessible Computational Thinking Tool. This article describes the journey to overcome first syntactic, then semantic, and most recently pragmatic, obstacles in computer science education.
... Microsoft's TouchDevelop [6,35] is another recent Web language design, intended to appeal to novices. TouchDevelop allows client code to manipulate distributed data structures directly, applying distributed-systems techniques automatically to enforce eventual consistency. ...
Article
The World Wide Web has evolved gradually from a document delivery platform to an architecture for distributed programming. This largely unplanned evolution is apparent in the set of interconnected languages and protocols that any Web application must manage. This paper presents Ur/Web, a domain-specific, statically typed functional programming language with a much simpler model for programming modern Web applications. Ur/Web's model is unified, where programs in a single programming language are compiled to other "Web standards" languages as needed; modular, supporting novel kinds of encapsulation of Web-specific state; and exposes simple concurrency, where programmers can reason about distributed, multithreaded applications via a mix of transactions and cooperative preemption. We give a tutorial introduction to the main features of Ur/Web, formalize the basic programming model with operational semantics, and discuss the language implementation and the production Web applications that use it.
... (2) Semantics: Live programming (Burckhardt et al., 2013; McDirmid, 2013; McDirmid, 2007) and similar approaches help users to understand the meaning of programs by illustrating the consequences of changes to programs. ...
Chapter
Full-text available
Computational Thinking is a fundamental skill for the twenty-first century workforce. This broad target audience, including teachers and students with no programming experience, necessitates a shift in perspective toward Computational Thinking Tools that not only provide highly accessible programming environments but explicitly support the Computational Thinking Process. This evolution is crucial if Computational Thinking Tools are to be relevant to a wide range of school disciplines including STEM, art, music, and language learning. Computational Thinking Tools must help users through three fundamental stages of Computational Thinking: problem formulation, solution expression, and execution/evaluation. This chapter outlines three principles, and employs AgentCubes online as an example, on how a Computational Thinking Tool provides support for these stages by unifying human abilities with computer affordances.
... This notion of reduction commuting with instantiation has also been studied in other calculi [40]. Being able to edit a running program also has connections to less formal work on "live programming" interfaces [5,23]. ...
Article
Full-text available
Structure editors allow programmers to edit the tree structure of a program directly. This can have cognitive benefits, particularly for novice and end-user programmers. It also simplifies matters for tool designers, because they do not need to contend with malformed program text. This paper introduces Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and a cursor. Hazelnut goes one step beyond syntactic well-formedness: its edit actions operate over statically meaningful incomplete terms. Naïvely, this would force the programmer to construct terms in a rigid “outside-in” manner. To avoid this problem, the action semantics automatically places terms assigned a type that is inconsistent with the expected type inside a hole. This meaningfully defers the type consistency check until the term inside the hole is finished. Hazelnut is not intended as an end-user tool itself. Instead, it serves as a foundational account of typed structure editing. To that end, we describe how Hazelnut’s rich metatheory, which we have mechanized using the Agda proof assistant, serves as a guide when we extend the calculus to include binary sum types. We also discuss various interpretations of holes, and in so doing reveal connections with gradual typing and contextual modal type theory, the Curry-Howard interpretation of contextual modal logic. Finally, we discuss how Hazelnut’s semantics lends itself to implementation as an event-based functional reactive program. Our simple reference implementation is written using js_of_ocaml.
... More recently, the Web has started to affect programming technology too. Scores of programming languages compile to run on the Web [25] and there are several Web IDEs in widespread use [5,9,15,39,47,48,57,68,76,80]. This growing audience for Web IDEs, and correspondingly languages that run in the browser, includes professionals and students. ...
Article
Scores of compilers produce JavaScript, enabling programmers to use many languages on the Web, reuse existing code, and even use Web IDEs. Unfortunately, most compilers inherit the browser's compromised execution model, so long-running programs freeze the browser tab, infinite loops crash IDEs, and so on. The few compilers that avoid these problems suffer poor performance and are difficult to engineer. This paper presents Stopify, a source-to-source compiler that extends JavaScript with debugging abstractions and blocking operations, and easily integrates with existing compilers. We apply Stopify to ten programming languages and develop a Web IDE that supports stopping, single-stepping, breakpointing, and long-running computations. For nine languages, Stopify requires no or trivial compiler changes. For eight, our IDE is the first that provides these features. Two of our subject languages have compilers with similar features. Stopify's performance is competitive with these compilers and it makes them dramatically simpler. Stopify's abstractions rely on first-class continuations, which it provides by compiling JavaScript to JavaScript. We also identify sub-languages of JavaScript that compilers implicitly use, and exploit these to improve performance. Finally, Stopify needs to repeatedly interrupt and resume program execution. We use a sampling-based technique to estimate program speed that outperforms other systems.
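To make the problem Stopify addresses concrete, here is a much-simplified TypeScript sketch of one common technique, not Stopify's continuation-based solution: a compiler can rewrite source loops so they yield periodically, letting an IDE pause, step, or stop long-running programs without freezing the browser tab. All names here are illustrative.

```typescript
// Original (generated) code such as `while (true) { n += 1; }` would freeze
// the tab; in the rewritten form the loop body becomes a resumable step.
interface Interrupter { stopRequested: boolean }

async function runInterruptible(
  step: () => boolean,   // returns false when the program is done
  ctl: Interrupter,
  budgetMs = 10          // run at most this long before yielding
): Promise<void> {
  while (!ctl.stopRequested) {
    const deadline = Date.now() + budgetMs;
    while (Date.now() < deadline) {
      if (!step()) return;   // program finished normally
    }
    // Yield to the event loop so the UI (and a "Stop" button) stays live.
    await new Promise(resolve => setTimeout(resolve, 0));
  }
}

// Usage: an "infinite" loop that the IDE can now stop from a button handler.
const ctl: Interrupter = { stopRequested: false };
let n = 0;
runInterruptible(() => { n += 1; return true; }, ctl);
setTimeout(() => { ctl.stopRequested = true; console.log(`stopped at n = ${n}`); }, 50);
```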
... The literature cites a number of advantages of live programming. The first is how the approach minimizes latency between programming and seeing its effect, aiding in the development process [41], what Burckhardt et al. [42] call the "temporal and perceptive gap". A second benefit relates to enabling the act of programming to become a form of performance in a way not possible in non-live environments [43]. ...
Conference Paper
As physical computing devices proliferate, researchers and educators push to make them more engaging to learners. One approach is to make the act of programming them more interactive and responsive via live programming so that program edits are immediately reflected in the behavior of the physical device. To understand the impact of live programming on interactions with physical computing devices, we conducted a comparative study where children ages 11-15 programmed a BBC micro:bit device using either the MicroBlocks live programming environment or MakeCode, the micro:bit default environment. Results show that MicroBlocks users spent more time interacting directly with the physical device while showing different patterns of interaction compared to MakeCode users. We also found variations in the differences between environments related to activity structures. This paper contributes to the growing body of literature on how the design of interfaces---like programming environments---for physical computing devices shapes emerging interaction patterns.
... Live programming [2] aims to free developers from the "edit-compile-run" cycle, and allows them to change programs at runtime and get immediate feedback on the change. Often, a form of live programming is supported by several existing programming languages and Integrated Development Environments (IDEs) (e.g., [3]), and its benefits and utility are discussed in several studies (e.g., [4], [5]). ...
Conference Paper
Full-text available
In the context of Model-driven Development (MDD) models can be executed by interpretation or by the translation of models into existing programming languages, often by code generation. This work presents Live-UMLRT, a tool that supports live modeling of UML-RT models when they are executed by code generation. Live-UMLRT is entirely independent of any live programming support offered by the target language. This independence is achieved with the help of a model transformation which equips the model with support for, e.g., debugging and state transfer both of which are required for live modeling. A subsequent code generation then produces a self-reflective program that allows changes to the model elements at runtime (through synchronization of design and runtime models). We have evaluated Live-UMLRT on several use cases. The evaluation shows that (1) code generation, transformation, and state transfer can be carried out with reasonable performance, and (2) our approach can apply model changes to the running execution faster than the standard approach that depends on the live programming support of the target language. A demonstration video: https://youtu.be/6GrR-Y9je7Y
... Sometimes only certain parts of the program need to be re-executed. For example, TouchDevelop cleverly separates GUI-generating code from other code so that GUI updates can be rendered quickly [5]. ...
Article
Full-text available
MIT App Inventor is a programming environment that lowers the barriers to creating mobile apps for Android devices, especially for people with little or no programming experience. App Inventor apps for a mobile device are constructed by arranging components with a WYSIWYG editor in a computer web browser, where the development computer is connected to the device by WiFi or USB. The behavior of the components is specified using a blocks-based graphical programming language. A key feature in making App Inventor accessible to beginning programmers is live programming: developers interact directly with the state of the evolving program as it is being constructed, and changes made in the web browser are realized instantaneously in the running app on the device. This paper describes the live programming features of App Inventor and explains how they are implemented.
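As a hedged illustration of the general mechanism behind this kind of live connection, the TypeScript sketch below shows a development environment sending an edited behavior to a companion runtime, which re-installs it in the running app. The names (CompanionRuntime, installHandler) and the message format are hypothetical, not App Inventor's actual protocol.

```typescript
type Handler = (arg: string) => void;

class CompanionRuntime {
  private handlers = new Map<string, Handler>();

  // Called when an updated behavior arrives from the browser-based editor.
  installHandler(componentEvent: string, source: string): void {
    // A real system would interpret the blocks program; here we simply
    // compile a one-argument function body from the received source text.
    this.handlers.set(componentEvent, new Function("arg", source) as Handler);
  }

  // The running app dispatches events through the (possibly updated) table.
  fire(componentEvent: string, arg: string): void {
    this.handlers.get(componentEvent)?.(arg);
  }
}

const device = new CompanionRuntime();
device.installHandler("Button1.Click", "console.log('hello ' + arg)");
device.fire("Button1.Click", "world");
// The developer edits the behavior in the browser; only the handler is
// replaced, the app keeps running and its other state is untouched.
device.installHandler("Button1.Click", "console.log('HELLO, ' + arg + '!')");
device.fire("Button1.Click", "world");
```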
... Developers are editing the running program, and immediately see the impact of any code change (e.g., on the output that the program produces). Hence, developers can make use of immediate feedback to steer the next editing steps [8]. Existing work on live programming has also stressed the importance of usable IDEs and development environments [27]. ...
Preprint
Full-text available
A unifying theme of many ongoing trends in software engineering is a blurring of the boundaries between building and operating software products. In this paper, we explore what we consider to be the logical next step in this succession: integrating runtime monitoring data from production deployments of the software into the tools developers utilize in their daily workflows (i.e., IDEs) to enable tighter feedback loops. We refer to this notion as feedback-driven development (FDD). This more abstract FDD concept can be instantiated in various ways, ranging from IDE plugins that implement feedback-driven refactoring and code optimization to plugins that predict performance and cost implications of code changes prior to even deploying the new version of the software. We demonstrate existing proof-of-concept realizations of these ideas and illustrate our vision of the future of FDD and cloud-based software development in general. Further, we discuss the major challenges that need to be solved before FDD can achieve mainstream adoption.
Conference Paper
Typically, development of robot behavior entails writing the code, deploying it on a simulator or robot and running it for testing. If this feedback reveals errors, the programmer mentally needs to map the error in behavior back to the source code that caused it before being able to fix it. This process suffers from a large cognitive distance between the code and the resulting behavior, which slows down development and can make experimentation with different behaviors prohibitively expensive. In contrast, Live Programming tightens the feedback loop, minimizing cognitive distance. As a result, programmers benefit from an immediate connection with the program that they are making, thanks to immediate, 'live' feedback on program behavior. This allows for extremely rapid creation, or variation, of robot behavior and for dramatically increased debugging speed. To enable such Live Robot Programming, in this article we propose a language that provides for live programming of nested state machines and integrates with the Robot Operating System (ROS). We detail the language, named LRP, illustrate how it can be used to rapidly implement a behavior on a running robot, and discuss the key points of the language that enable its liveness.
Conference Paper
The idea of live programming has been applied in various domains, including the exploration of simulations, general-purpose application development, and even live performance of music. As a result, different qualitative definitions of the term live programming exist. Often, these definitions refer to a sense of "directness" or "immediacy" regarding the responses of the system. However, most of them lack quantitative thresholds of this response time. Thus, we propose a survey of live programming environments to determine common response times the community regards as sufficient. In this paper, we discuss the design of an initial survey focusing on general-purpose live programming environments. We describe the selection process of systems and the benchmarking model to measure relevant time spans. We illustrate the potential outcomes of such a study with results from applying the benchmarking model to Squeak/Smalltalk and the Self environment. The results hint that a quick adaptation of the executable form might be a common feature of live programming environments.
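A simple hedged sketch of measuring the kind of response time such a survey is after: the span between submitting a code change and observing the adapted behavior. The instrumentation points are illustrative and not the paper's benchmarking model.

```typescript
async function measureEditToFeedback(
  applyEdit: () => Promise<void>,     // e.g. recompile / hot-swap a method
  observeEffect: () => Promise<void>  // e.g. resolve when the changed output is visible
): Promise<number> {
  const start = performance.now();
  await applyEdit();
  await observeEffect();
  return performance.now() - start;   // milliseconds of perceived latency
}

// Usage: compare environments by running the same edit scenario in each.
measureEditToFeedback(
  async () => { /* trigger the environment's code update here */ },
  async () => { /* resolve when the new behavior is observable */ }
).then(ms => console.log(`edit-to-feedback latency: ${ms.toFixed(1)} ms`));
```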
Article
Most languages expose the computer's ability to globally read and write memory at any time. Programmers must then choreograph control flow so all reads and writes occur in correct relative orders, which can be difficult particularly when dealing with initialization, reactivity, and concurrency. Just as many languages now manage memory to unburden us from properly freeing memory, they should also manage time to automatically order memory accesses for us in the interests of comprehensibility, correctness, and simplicity. Time management is a general language feature with a large design space that is largely unexplored; we offer this perspective to relate prior work and guide future research. We introduce Glitch as a form of managed time that replays code for an appearance of simultaneous memory updates, avoiding the need for manual order. The key to such replay reaching consistent program states is an ability to reorder and rollback updates as needed, restricting the imperative model while retaining the basic concepts of memory access and control flow. This approach can also handle code to enable live programming that incrementally revises program executions in an IDE under arbitrary code changes.
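The core idea of replayed execution can be illustrated with a small sketch: reads see the committed store, writes are buffered, and the code is re-run until the store stops changing, giving the appearance of simultaneous updates. This is a toy fixpoint loop of our own, assuming a simple key/value store; it is not Glitch's actual rollback and reordering machinery.

```typescript
// Toy "replayed" execution: reads see the committed store, writes are buffered,
// and the code is re-run until the store stops changing.
type Store = Map<string, number>;

function replayToFixpoint(code: (read: (k: string) => number,
                                 write: (k: string, v: number) => void) => void,
                          store: Store, maxRounds = 100): Store {
  for (let round = 0; round < maxRounds; round++) {
    const pending = new Map<string, number>();
    code(k => store.get(k) ?? 0, (k, v) => pending.set(k, v));
    // Commit buffered writes; if nothing changed, execution is consistent.
    let changed = false;
    for (const [k, v] of pending) {
      if (store.get(k) !== v) { store.set(k, v); changed = true; }
    }
    if (!changed) return store;
  }
  throw new Error("no fixpoint reached");
}

// Usage: 'b' depends on 'a' even though it is written "before" a is updated.
const result = replayToFixpoint((read, write) => {
  write("b", read("a") + 1);
  write("a", 10);
}, new Map());
console.log(result.get("a"), result.get("b")); // 10 11 after replay
```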
Conference Paper
Software engineering tools and environments are migrating to the cloud, enabling more people to participate in programming from many more devices. To study this phenomenon in detail, we designed, implemented and deployed TouchDevelop (www.touchdevelop.com), a cloud-based integrated development environment (CIDE), which has been online for the past three years. TouchDevelop combines a cross-platform browser-based IDE for the creation of mobile cloud apps, an online programmer/user community, and an app store. A central feature of TouchDevelop is to track all program edits, versions, runtime information, bugs, as well as user comments, questions and feedback in a single cloud-based repository that is available publicly via Web APIs. In this paper, we examine a key feature of TouchDevelop that should be relevant to others creating CIDEs, namely the seamless integration of replicated workspaces, simplified version control and app publishing. An analysis of the TouchDevelop repository shows that this combination of capabilities allows users to easily create new versions of apps from existing apps, make changes to other users' apps, and share their results from a variety of devices, including smartphones, tablets and traditional PCs.
Article
InterState is a new programming language and environment that addresses the challenges of writing and reusing user interface code. InterState represents interactive behaviors clearly and concisely using a combination of novel forms of state machines and constraints. It also introduces new language features that allow programmers to easily modularize and reuse behaviors. InterState uses a new visual notation that allows programmers to better understand and navigate their code. InterState also includes a live editor that immediately updates the running application in response to changes in the editor and vice versa to help programmers understand the state of their program. Finally, InterState can interface with code and widgets written in other languages, for example to create a user interface in InterState that communicates with a database. We evaluated the understandability of InterState's programming primitives in a comparative laboratory study. We found that participants were twice as fast at understanding and modifying GUI components when they were implemented with InterState than when they were implemented in a conventional textual event-callback style. We evaluated InterState's scalability with a series of benchmarks and example applications and found that it can scale to implement complex behaviors involving thousands of objects and constraints.
Conference Paper
We present a live, multiple-representation novice environment for probabilistic programming based on the Infer.NET language. When compared to a text-only editor in a controlled experiment on 16 participants, our system showed a significant reduction in keystrokes during introductory probabilistic programming exercises, and subsequently, a significant improvement in program description and debugging tasks as measured by task time, keystrokes and deletions.
Conference Paper
Programmers write source code that compiles to programs, and users execute the programs to benefit from their features. While issue-tracking systems help communication between these two groups of people, feature requests have usually been written as text with optional figures that follow community guidelines and need human interpretation to understand what to implement in which part of the source code. To make this process more direct, intuitive, and efficient, a streamlined interaction design called "User-Generated Variables (UGV)" is proposed. First, the users can declare parameters that they want to tweak in existing programs without reading or understanding the source code. Then, the system turns the proposal into variable declarations in the relevant part of the source code. Finally, the programmers are notified of the proposal and can implement the actual features to reflect changes in the variable value. The proposed interaction is implemented in two existing Web-based Integrated Development Environments, and its user experience is briefly tested with eight users and programmers. Its technical requirements, limitations, and potential are discussed. The content of this paper with live examples is available at http://junkato.jp/ugv.
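A minimal sketch of the first step of this interaction, turning a user's proposal into a variable declaration inserted into existing source text, might look as follows. The `VariableProposal` shape and the naive top-of-file insertion are assumptions for illustration, not the UGV implementation.

```typescript
// Toy sketch: a user-proposed parameter becomes a variable declaration that
// the programmer is later asked to wire into the real feature.
interface VariableProposal {
  name: string;          // e.g. "headerFontSize"
  initialValue: number;  // value the user reached by tweaking the running app
  comment: string;       // the user's description of what they want to change
}

function insertDeclaration(source: string, p: VariableProposal): string {
  const decl = `let ${p.name} = ${p.initialValue}; // requested by user: ${p.comment}\n`;
  // Naively place the declaration at the top of the file; a real system would
  // pick the relevant part of the source code.
  return decl + source;
}

const updated = insertDeclaration(
  "render(document.body);\n",
  { name: "headerFontSize", initialValue: 24, comment: "make the title bigger" }
);
console.log(updated);
```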
Conference Paper
Aspects of live programming that originated with Lisp and Smalltalk systems have recently seen renewed research and industrial interest due to their educational and productivity potential (Live workshops at ECOOP, ICSE, and SPLASH, live facilities for the Microsoft .NET, Java, Python, and Swift platforms). Especially in the case of visual modeling and simulation tools that are used by experts outside Informatics (such as ecologists, biologists, economists, epidemiologists, ...), the constant feedback loop that live systems provide can ease the development and comprehension of complex systems, via truly explorable environments. Unfortunately, taking the domain of Epidemiology as an example, we observe that the visual aspect of such systems offers no notion of modularity, and thus exploration is limited to small monolithic examples. In order to address this issue, we propose a model for modular visual exploration. This model is based on an extension of the OpenPonk platform targeting Kendrick, a domain-specific language (DSL) for epidemiology. Through this model, we were able to map the separation of concerns of the Kendrick DSL onto a live visual notation that supports modularity and exploration of part-whole hierarchies.
Conference Paper
Automation is one of the key solutions proposed and adopted by international Air Transport research programs to meet the challenges of increasing air traffic. For automation to be safe and usable, it needs to be suited to the activity it supports, both when authoring it and when operating it. Here we present Vizir, a Domain-Specific Graphical Language and an Environment for authoring and operating airport automations. We used a participatory-design process with Air Traffic Controllers to gather requirements for Vizir and to design its features. Vizir combines visual interaction-oriented programming constructs with activity-related geographic areas and events. Vizir offers explicit human-control constructs, graphical substrates and means to scale up with multiple automations. We propose a set of guidelines to inspire designers of similar usable hybrid human-automation systems.
Conference Paper
Live modeling enables modelers to incrementally update models as they are running and get immediate feedback about the impact of their changes. Changes introduced in a model may trigger inconsistencies between the model and its run-time state (e.g., deleting the current state of a state machine), effectively requiring the run-time state to be migrated to comply with the updated model. In this paper, we introduce an approach that automatically migrates such run-time state based on declarative constraints defined by the language designer. We illustrate the approach using Nextep, a meta-modeling language for defining invariants and migration constraints on run-time state models. When a model changes, Nextep employs model finding techniques, backed by a solver, to automatically infer a new run-time model that satisfies the declared constraints. We apply Nextep to define migration strategies for two DSLs, and report on its expressiveness and performance.
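The migration problem can be made concrete with a small sketch: if a live edit deletes the state that a running state machine currently occupies, some rule must pick a replacement. The hard-coded fallback below (restart from the initial state) merely stands in for the declarative constraints and solver-backed model finding that Nextep actually uses.

```typescript
// Toy run-time state migration after a live model edit.
interface StateMachineModel { states: string[]; initial: string; }
interface RuntimeState { current: string; }

function migrate(model: StateMachineModel, rt: RuntimeState): RuntimeState {
  // Invariant declared by the language designer: current must exist in the model.
  if (model.states.includes(rt.current)) return rt;
  // Migration constraint: otherwise restart from the initial state.
  return { current: model.initial };
}

const model = { states: ["idle", "running"], initial: "idle" };
console.log(migrate(model, { current: "paused" })); // { current: "idle" }
```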
Conference Paper
Live programming is an activity in which the programmer edits code while observing the result of the program. It has been exercised mainly for pedagogical and artistic purposes, where the outputs of a program are not straightforward to imagine. While most live programming environments so far target programs that explicitly generate visual or acoustic outputs, we believe that live programming is also useful for data structure programming, where the programmer often has a hard time grasping the behavior of a program. However, it is not clear what features a live programming environment should provide for such kinds of programs. In this paper, we present a design of a live programming environment for data structure programming, identify the problems of synchronization and mental map preservation, and propose solutions based on a calling-context sensitive identification technique. We implemented a live programming environment called Kanon, and tested it with 13 programmers.
Article
Full-text available
Sometimes, service clients repeat requests in a polling loop in order to refresh their view. However, such polling may be slow to pick up changes, or may increase the load unacceptably, in particular for composed services that are dispersed over many components. We present an alternative reactive polling API and reactive caching algorithm that combines the conceptual simplicity of polling with the efficiency of push-based change propagation. A reactive cache contains a summary of a distributed read-only operation and maintains a connection to its dependencies so changes can be propagated automatically. We first formalize the setting using an abstract calculus for composed services. Then we present a fault-tolerant distributed algorithm for reactive caching that guarantees eventual consistency. Finally, we implement and evaluate our solution by extending the Orleans actor framework, and perform experiments on two benchmarks in a distributed cloud deployment. The results show that our solution provides superior performance compared to polling, at a latency that comes close to hand-written change notifications.
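The contrast between polling and push-based propagation can be sketched as a small single-process cache that recomputes lazily after a dependency pushes an invalidation. This toy (`ReactiveCache`, with assumed names) ignores distribution, fault tolerance, and eventual consistency, which are the substance of the paper.

```typescript
// Toy reactive cache: keep the last result, let dependencies push invalidations,
// and only recompute when a client actually reads the value again.
class ReactiveCache<T> {
  private value: T | undefined;
  private fresh = false;
  private listeners: ((v: T) => void)[] = [];

  constructor(private compute: () => Promise<T>) {}

  async get(): Promise<T> {
    if (!this.fresh) {
      this.value = await this.compute();
      this.fresh = true;
      this.listeners.forEach(l => l(this.value!));
    }
    return this.value!;
  }
  // Called by a dependency when its data changes (push-based propagation).
  invalidate(): void { this.fresh = false; }
  onChange(l: (v: T) => void): void { this.listeners.push(l); }
}

// Usage: clients read from the cache; a change notification refreshes it lazily.
(async () => {
  const summary = new ReactiveCache(async () => "42 open orders");
  summary.onChange(v => console.log("view refreshed:", v));
  await summary.get();     // computes once
  summary.invalidate();    // a dependency changed
  await summary.get();     // recomputes and notifies listeners
})();
```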
Article
Full-text available
Live programming environments aim to provide programmers (and sometimes audiences) with continuous feedback about a program's dynamic behavior as it is being edited. The problem is that programming languages typically assign dynamic meaning only to programs that are complete, i.e. syntactically well-formed and free of type errors. Consequently, live feedback presented to the programmer exhibits temporal or perceptive gaps. This paper confronts this "gap problem" from type-theoretic first principles by developing a dynamic semantics for incomplete functional programs, starting from the static semantics for incomplete functional programs developed in recent work on Hazelnut. We model incomplete functional programs as expressions with holes, with empty holes standing for missing expressions or types, and non-empty holes operating as membranes around static and dynamic type inconsistencies. Rather than aborting when evaluation encounters any of these holes as in some existing systems, evaluation proceeds around holes, tracking the closure around each hole instance as it flows through the remainder of the program. Editor services can use the information in these hole closures to help the programmer develop and confirm their mental model of the behavior of the complete portions of the program as they decide how to fill the remaining holes. Hole closures also enable a fill-and-resume operation that avoids the need to restart evaluation after edits that amount to hole filling. Formally, the semantics borrows machinery from both gradual type theory (which supplies the basis for handling unfilled type holes) and contextual modal type theory (which supplies a logical basis for hole closures), combining these and developing additional machinery necessary to continue evaluation past holes while maintaining type safety. We have mechanized the metatheory of the core calculus, called Hazelnut Live, using the Agda proof assistant. We have also implemented these ideas into the Hazel programming environment. The implementation inserts holes automatically, following the Hazelnut edit action calculus, to guarantee that every editor state has some (possibly incomplete) type. Taken together with this paper's type safety property, the result is a proof-of-concept live programming environment where rich dynamic feedback is truly available without gaps, i.e. for every reachable editor state.
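The idea of evaluating around holes can be illustrated with a toy expression evaluator that records the environment seen at each hole instead of aborting. The `Expr`/`Value` encoding below is an assumption for illustration and does not capture Hazelnut Live's typed semantics or fill-and-resume.

```typescript
// Toy evaluation "around" holes: a hole does not abort evaluation; instead its
// environment (a crude stand-in for a hole closure) is recorded for the editor.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "var"; name: string }
  | { kind: "add"; left: Expr; right: Expr }
  | { kind: "hole"; id: number };

type Env = Record<string, number>;
type Value = number | { hole: number; env: Env };

function evalExpr(e: Expr, env: Env, closures: Map<number, Env>): Value {
  switch (e.kind) {
    case "num": return e.value;
    case "var": return env[e.name];
    case "hole":
      closures.set(e.id, { ...env });   // remember what the hole "saw"
      return { hole: e.id, env };
    case "add": {
      const l = evalExpr(e.left, env, closures);
      const r = evalExpr(e.right, env, closures);
      // If either side involves a hole, keep an indeterminate result.
      if (typeof l !== "number" || typeof r !== "number") return { hole: -1, env };
      return l + r;
    }
  }
}

// Usage: x + ?0 evaluates around the hole and records its closure.
const closures = new Map<number, Env>();
const v = evalExpr(
  { kind: "add", left: { kind: "var", name: "x" }, right: { kind: "hole", id: 0 } },
  { x: 5 }, closures
);
console.log(v, closures.get(0)); // indeterminate result, closure { x: 5 }
```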
Article
A frequent programming pattern for small tasks, especially expressions, is to repeatedly evaluate the program on an input as its editing progresses. The Read-Eval-Print Loop (REPL) interaction model has been a successful model for this programming pattern. We present the new notion of Read-Eval-Synth Loop (RESL) that extends REPL by providing in-place synthesis on parts of the expression marked by the user. RESL eases programming by synthesizing parts of a required solution. The underlying synthesizer relies on a partial solution from the programmer and a few examples. RESL hinges on bottom-up synthesis with general predicates and sketching, generalizing programming by example. To make RESL practical, we present a formal framework that extends observational equivalence to non-example specifications. We evaluate RESL by conducting a controlled within-subjects user-study on 19 programmers from 8 companies, where programmers are asked to solve a small but challenging set of competitive programming problems. We find that programmers using RESL solve these problems with far less need to edit the code themselves and by browsing documentation far less. In addition, they are less likely to leave a task unfinished and more likely to be correct.
Conference Paper
Scores of compilers produce JavaScript, enabling programmers to use many languages on the Web, reuse existing code, and even use Web IDEs. Unfortunately, most compilers inherit the browser's compromised execution model, so long-running programs freeze the browser tab, infinite loops crash IDEs, and so on. The few compilers that avoid these problems suffer poor performance and are difficult to engineer. This paper presents Stopify, a source-to-source compiler that extends JavaScript with debugging abstractions and blocking operations, and easily integrates with existing compilers. We apply Stopify to ten programming languages and develop a Web IDE that supports stopping, single-stepping, breakpointing, and long-running computations. For nine languages, Stopify requires no or trivial compiler changes. For eight, our IDE is the first that provides these features. Two of our subject languages have compilers with similar features. Stopify's performance is competitive with these compilers and it makes them dramatically simpler. Stopify's abstractions rely on first-class continuations, which it provides by compiling JavaScript to JavaScript. We also identify sub-languages of JavaScript that compilers implicitly use, and exploit these to improve performance. Finally, Stopify needs to repeatedly interrupt and resume program execution. We use a sampling-based technique to estimate program speed that outperforms other systems.
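The user-visible effect, long-running code that can be paused without freezing the page, can be approximated by hand with generators and a slicing scheduler, as sketched below. Stopify's contribution is to obtain this automatically via source-to-source compilation and first-class continuations; the sketch only illustrates the behavior, not the technique.

```typescript
// Hand-written approximation of interruptible execution: the computation yields
// periodically so a tiny scheduler can keep the event loop (and the IDE) responsive.
function* longComputation(n: number): Generator<number, number, void> {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += i;
    if (i % 1000 === 0) yield i;   // a point where the runtime may pause us
  }
  return sum;
}

function runInSlices(gen: Generator<number, number, void>, onDone: (r: number) => void) {
  const step = () => {
    const { done, value } = gen.next();
    if (done) onDone(value as number);
    else setTimeout(step, 0);      // give the browser a chance to breathe
  };
  step();
}

runInSlices(longComputation(1_000_000), r => console.log("result:", r));
```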
Conference Paper
Full-text available
Dynamic software updating (DSU) systems allow programs to be updated while running, thereby permitting developers to add features and fix bugs without downtime. This paper introduces Kitsune, a new DSU system for C whose design has three notable features. First, Kitsune's updating mechanism updates the whole program, not individual functions. This mechanism is more flexible than most prior approaches and places no restrictions on data representations or allowed compiler optimizations. Second, Kitsune makes the important aspects of updating explicit in the program text, making the program's semantics easy to understand while minimizing programmer effort. Finally, the programmer can write simple specifications to direct Kitsune to generate code that traverses and transforms old-version state for use by new code; such state transformation is often necessary, and is significantly more difficult in prior DSU systems. We have used Kitsune to update five popular, open-source, single- and multi-threaded programs, and find that few program changes are required to use Kitsune, and that it incurs essentially no performance overhead.
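The shape of such an update, an explicit switch-over point plus a programmer-written state transformer, can be sketched in a few lines. The `StateV1`/`StateV2` types and `migrateV1toV2` function are illustrative assumptions; Kitsune itself operates on whole C programs with generated state-traversal code.

```typescript
// Toy whole-program update: run the old version to an explicit update point,
// transform its state, and continue with the new version without losing state.
interface StateV1 { counter: number; }
interface StateV2 { counter: number; startedAt: string; }   // new field in v2

type Program<S> = { state: S; step: (s: S) => S };

// Transformer written by the programmer for the v1 -> v2 update.
function migrateV1toV2(old: StateV1): StateV2 {
  return { counter: old.counter, startedAt: new Date().toISOString() };
}

function runWithUpdate(p1: Program<StateV1>, p2: Program<StateV2>) {
  let s1 = p1.state;
  for (let i = 0; i < 3; i++) s1 = p1.step(s1);   // old version runs...
  // ...reaches its update point, so we switch versions, carrying state over.
  let s2 = migrateV1toV2(s1);
  for (let i = 0; i < 3; i++) s2 = p2.step(s2);
  console.log(s2);
}

runWithUpdate(
  { state: { counter: 0 }, step: s => ({ counter: s.counter + 1 }) },
  { state: { counter: 0, startedAt: "" }, step: s => ({ ...s, counter: s.counter + 1 }) }
);
```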
Article
Full-text available
We present a new approach to programming languages for parallel computers that uses an effect system to discover expression scheduling constraints. This effect system is part of a 'kinded' type system with three base kinds: types, which describe the value that an expression may return; effects, which describe the side-effects that an expression may have; and regions, which describe the area of the store in which side-effects may occur. Types, effects and regions are collectively called descriptions. Expressions can be abstracted over any kind of description variable -- this permits type, effect and region polymorphism. Unobservable side-effects can be masked by the effect system; an effect soundness property guarantees that the effects computed statically by the effect system are a conservative approximation of the actual side-effects that a given expression may have. The effect system we describe performs certain kinds of side-effect analysis that were not previously feasible. Experimental data from the programming language FX indicate that an effect system can be used effectively to compile programs for parallel computers.
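The flavor of tracking effects in types can be loosely sketched in TypeScript by tagging computations with the set of effects they may perform, so that a combinator can reject effectful arguments at compile time. This encoding is an assumption for illustration only; it captures neither FX's kinded descriptions nor its region-based effect masking.

```typescript
// Toy effect tagging: a computation carries the effects it may perform, and a
// scheduler that admits only pure computations rejects the rest at compile time.
type Effect = "read" | "write" | "render";

interface Eff<T, E extends Effect> {
  run: () => T;
  effects: E[];
}

const store = new Map<string, number>();

function pure<T>(value: T): Eff<T, never> {
  return { run: () => value, effects: [] };
}

function write(key: string, v: number): Eff<void, "write"> {
  return { run: () => void store.set(key, v), effects: ["write"] };
}

// Only effect-free computations may be reordered or parallelized freely.
function runPureOnly<T>(c: Eff<T, never>): T {
  return c.run();
}

runPureOnly(pure(42));          // fine: no effects
write("x", 1).run();            // effectful computations run explicitly
// runPureOnly(write("y", 2));  // rejected by the type checker
```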
Article
Full-text available
The world is experiencing a technology shift. In 2011, more touchscreen-based mobile devices like smartphones and tablets will be sold than desktops, laptops, and netbooks combined. In fact, in many cases incredibly powerful and easy-to-use smartphones are going to be the first and, in less developed countries, possibly the only computing devices which virtually all people will own, and carry with them at all times. Furthermore, mobile devices do not only have touchscreens, but they are also equipped with a multitude of sensors, such as location information and acceleration, and they are always connected to the cloud. TouchDevelop is a novel application creation environment for anyone to script their smartphones anywhere -- you do not need a separate PC. TouchDevelop allows you to develop mobile device applications that can access your data, your media, your sensors and allows using cloud services including storage, computing, and social networks. TouchDevelop targets students and hobbyists, not necessarily the professional developer. Typical TouchDevelop applications are written for fun, or for personalizing the phone. TouchDevelop's typed, structured programming language is built around the idea of only using a touchscreen as the input device to author code. It has built-in primitives which make it easy to access the rich sensor data available on a mobile device. In our vision, the state of the program is automatically distributed between mobile clients and the cloud, with automatic synchronization of data and execution between clients and cloud, liberating the programmer from worrying about (or even having to know about) the details. We report on our experience with our first prototype implementation for the Windows Phone 7 platform, which already realizes a large portion of our vision. It is available on the Windows Phone Marketplace.
Conference Paper
Full-text available
Representing programs as text strings makes programming harder than it has to be. The source text of a program is far removed from its behavior. Bridging this conceptual gulf is what makes programming so inhumanly difficult -- we are not compilers. Subtext is a new medium in which the representation of a program is the same thing as its execution. Like a spreadsheet, a program is visible and alive, constantly executing even as it is edited. Program edits are coherent semantic transformations. The essence of this new medium is copying. Programs are constructed by copying and executed by copy flow: the projection of changes through copies. The simple idea of copying develops into a rich theory of higher-order continual copying of trees. Notably absent are symbolic names, the workhorse of textual notation, replaced by immediately-bound explicit relationships. Subtext unifies traditionally distinct programming tools and concepts, and enables some novel ones. Ancestral structures are a new primitive data type that combines the features of lists and records, along with unproblematic multiple inheritance. Adaptive conditionals use first-class program edits to dynamically adapt behavior. A prototype implementation shows promise, but calls for much further research. Subtext suggests that we can make programming radically easier, if we are willing to be radical.
Conference Paper
Full-text available
Morphic is a user interface construction environment that strives to embody directness and liveness. Directness means a user interface designer can initiate the process of examining or changing the attributes, structure, and behavior of user interface components by pointing at their graphical representations directly. Liveness means the user interface is always active and reactive: objects respond to user actions, animations run, layout happens, and information displays update continuously. Four implementation techniques work together to support directness and liveness in Morphic: structural reification, layout reification, ubiquitous animation, and live editing.
Article
Full-text available
Lisp systems have been used for highly interactive programming for more than a decade. During that time, special properties of the Lisp language (such as program/data equivalence) have enabled a certain style of interactive programming to develop, characterized by powerful interactive support for the programmer, nonstandard program structures, and nonstandard program development methods. The paper summarizes the Lisp style of interactive programming for readers outside the Lisp community, describes those properties of Lisp systems that were essential for the development of this style, and discusses some current and not yet resolved issues.
Article
Full-text available
The directness, immediacy, and simplicity of visual programming languages are appealing. The question is, can VPLs be effectively applied to large scale programming problems while retaining these characteristics. In scaling up, the problem is how to expand applicability without sacrificing the goals of better logic expression and understanding. From a size standpoint, scaling up refers to the programmer's ability to apply VPLs in larger programs. Such programs range from those requiring several days' work by a single programmer to programs requiring months of work, large programming teams, and large data structures. From a problem domain standpoint, scaling up refers to suitability for many kinds of problems. These range from visual application domains-such as user interface design or scientific visualization-to general purpose programming in such diverse areas as financial planning, simulations, and real time applications with explicit timing requirements. To illustrate the scaling up problem, we discuss nine major subproblems and describe emerging solutions from existing VPL systems. First, we examine representation issues, including static representation, screen real estate, and documentation. Next, we examine programming language issues-procedural abstraction, interactive visual data abstraction, type checking, persistence, and efficiency. Finally, we look at issues beyond the coding process
Article
Full-text available
A new software system, called Pure Data, is in the early stages of development. Its design attempts to remedy some of the deficiencies of the Max program while preserving its strengths. The most important weakness of Max is the difficulty of maintaining compound data structures of the type that might arise when analyzing and resynthesizing sounds or when recording and modifying sequences of events of many different types. Also, it has proved hard to integrate non-audio signals (video, for instance, and also audio spectra) into Max's rigid "tilde object" system. Finally, the whole issue of maintaining two separate copies of all data structures (one to edit and one to access in real time) has caused much confusion and difficulty. Pd's working prototype attempts to simplify the data structures in Max to make them more readily combined into novel user-defined data structures. Also, the relationship between the graphical process and the real-time one (which is handled in one way on the Maci...
Article
Full-text available
An increasingly common characteristic in visual programming languages (VPLs) is level 4 liveness: the constant monitoring of the system state with continuous redisplay as events arrive and computations progress. However, level 4 liveness can be expensive. In this paper, we present an implementation method that supports level 4 liveness in declarative VPLs, ensuring without "unreasonable" cost that all values on the screen are correctly updated as computations progress. The method is especially well-suited for the growing class of declarative VPLs that display continuously time-varying calculations and graphics, such as GUI specification VPLs, event-based or reactive VPLs, scientific visualization VPLs, or graphical simulation VPLs.
Article
Full-text available
SELF is an object-oriented language for exploratory programming based on a small number of simple and concrete ideas: prototypes, slots, and behavior. Prototypes combine inheritance and instantiation to provide a framework that is simpler and more flexible than most object-oriented languages. Slots unite variables and procedures into a single construct. This permits the inheritance hierarchy to take over the function of lexical scoping in conventional languages. Finally, because SELF does not distinguish state from behavior, it narrows the gaps between ordinary objects, procedures, and closures. SELF's simplicity and expressiveness offer new insights into object-oriented computation. To thine own self be true. — William Shakespeare
Conference Paper
An increasingly common characteristic in visual programming languages (VPLs) is level 4 liveness: the constant monitoring of the system state with continuous redisplay as events arrive and computations progress. However, level 4 liveness can be expensive. We present an implementation method that supports level 4 liveness in declarative VPLs, ensuring without "unreasonable" cost that all values on the screen are correctly updated as computations progress. The method is especially well suited for the growing class of declarative VPLs that display continuously time-varying calculations and graphics, such as GUI specification VPLs, event-based or reactive VPLs, scientific visualization VPLs, or graphical simulation VPLs.
Conference Paper
In a system made up of many modules, each managing its own peculiar types of data structures, it is often necessary to update one of the modules so as to provide new features or an improvement in the internal organization. If the interface to the module is unchanged or merely augmented, the programs which interact with the module need not be changed. If the system can be brought to an orderly halt and if the module does not manage permanent data structures, it will merely be necessary to recompile the modified module, relink the system, stop the old system, and install the new one. If the module does manage permanent data structures which must be modified and the system is one which is expected to continue operation throughout the change, the problem is more difficult, but it can be solved. This paper discusses a solution.
Conference Paper
A new generation of mobile touch devices, such as the iPhone, iPad and Android devices, are equipped with powerful, modern browsers. However, regular websites are not optimized for the specific features and constraints of these devices, such as limited screen estate, unreliable Internet access, touch-based interaction patterns, and features such as GPS. While recent advances in web technology enable web developers to build web applications that take advantage of the unique properties of mobile devices, developing such applications exposes a number of problems, specifically: developers are required to use many loosely coupled languages with limited tool support and application code is often verbose and imperative. We introduce mobl, a new language designed to declaratively construct mobile web applications. Mobl integrates languages for user interface design, styling, data modeling, querying and application logic into a single, unified language that is flexible, expressive, enables early detection of errors, and has good IDE support.
Conference Paper
A dynamic language promotes ease of use through flexible typing, a focus on high-level programming, and by streamlining the edit-compile-debug cycle. Live languages go beyond dynamic languages with more ease of use features. A live language supports live programming that provides programmers with responsive and continuous feedback about how their edits affect program execution. A live language is also based on high-level constructs such as declarative rules so that programmers can write less code. A live language could also provide programmers with responsive semantic feedback to enable time-saving services such as code completion. This paper describes the design of a textual live language that is based on reactive data-flow values known as signals and dynamic inheritance. Our language, SuperGlue, supports live programming with responsive semantic feedback, which we demonstrate with a working prototype.
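A minimal sketch of such a reactive data-flow value is a signal whose dependents are recomputed when it changes. The `Signal`/`map` API below is an illustrative assumption and does not model SuperGlue's dynamic inheritance or its semantic feedback services.

```typescript
// Toy signal: a time-varying value that pushes changes to its dependents.
class Signal<T> {
  private subscribers: ((v: T) => void)[] = [];
  constructor(private value: T) {}
  get(): T { return this.value; }
  set(v: T): void {
    this.value = v;
    this.subscribers.forEach(s => s(v));   // push the change downstream
  }
  subscribe(s: (v: T) => void): void { s(this.value); this.subscribers.push(s); }
}

// A derived signal stays live: the label re-renders whenever width changes.
function map<A, B>(src: Signal<A>, f: (a: A) => B): Signal<B> {
  const out = new Signal(f(src.get()));
  src.subscribe(a => out.set(f(a)));
  return out;
}

const width = new Signal(100);
const label = map(width, w => `width: ${w}px`);
label.subscribe(text => console.log(text)); // logs immediately and on changes
width.set(120);
```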
Conference Paper
Many applications need to respond to incremental modifications to data. Being incremental, such modifications often require only incremental modifications to the output, making it possible to respond to them asymptotically faster than recomputing from scratch. In many cases, taking advantage of incrementality therefore dramatically improves performance, especially as the input size increases. As a frame of reference, note that in parallel computing speedups are bounded by the number of processors, often a (small) constant. Designing and developing applications that respond to incremental modifications, however, is challenging: it often involves developing highly specific, complex algorithms. Self-adjusting computation offers a linguistic approach to this problem. In self-adjusting computation, programs respond automatically and efficiently to modifications to their data by tracking the dynamic data dependences of the computation and incrementally updating their output as needed. In this invited talk, I present an overview of self-adjusting computation and briefly discuss the progress in developing the approach and present some recent advances.
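The essence of change propagation can be sketched as a toy dependency tracker: a computation records which inputs it read, and setting an input re-runs only the computations that depend on it. The `Input`/`adaptive` names are assumptions for illustration; the actual self-adjusting computation machinery uses execution traces and memoization to make re-execution efficient.

```typescript
// Toy dependency tracking: inputs remember which computations read them and
// re-run those computations (and only those) when the input changes.
class Input<T> {
  readers = new Set<() => void>();
  constructor(private value: T) {}
  get(tracker?: () => void): T {
    if (tracker) this.readers.add(tracker);
    return this.value;
  }
  set(v: T): void {
    this.value = v;
    this.readers.forEach(recompute => recompute());  // propagate the change
  }
}

function adaptive<T>(compute: (track: () => void) => T, onResult: (r: T) => void) {
  const rerun = () => onResult(compute(rerun));
  rerun();
}

// Usage: the sum re-runs automatically when an input it read changes.
const a = new Input(1), b = new Input(2);
adaptive(track => a.get(track) + b.get(track), r => console.log("a+b =", r));
a.set(10); // recomputes the sum and logs 12
```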
Immediate-mode graphical user interfaces. www.molly-rocket
  • C Muratori
Khan Academy - computer science
  • J Resig
Light Table - a reactive work surface for programming
  • C Granger
Inventing on principle. Invited talk at the Canadian University Software Engineering Conference (CUSEC)
  • B Victor
Self-adjusting computation (an overview). Proceedings of the 2009 ACM SIGPLAN workshop on Partial evaluation and program manipulation
  • Umut A Acar
TouchDevelop: programming cloud-connected mobile devices via touchscreen. Proceedings of the 10th SIGPLAN symposium on New ideas, new paradigms, and reflections on programming and software
  • Nikolai Tillmann
  • Michal Moskal
  • Jonathan De Halleux
  • Manuel Fahndrich