Conference Paper

Understanding memory allocation of Scheme programs.

DOI: 10.1145/357766.351264 Conference: Proceedings of the fifth ACM SIGPLAN international conference on Functional programming, Volume: 35
Source: DBLP

ABSTRACT Memory is the performance bottleneck of modern architectures. Keeping memory consumption as low as possible enables fast and unobtrusive applications. But it is not easy to estimate the memory use of programs implemented in functional languages, due to both the complex translations of some high-level constructs and the use of automatic memory managers. To help understand the memory allocation behavior of Scheme programs, we have designed two complementary tools. The first reports on frequency of allocation, heap configurations, and memory reclamation. The second tracks down memory leaks. We have applied these tools to our Scheme compiler, the largest Scheme program we have been developing. This has allowed us to drastically reduce the amount of memory consumed during its bootstrap process, without requiring much development time. Development tools will be neglected unless they are both conveniently accessible and easy to use. In order to avoid this pitfall, we have carefully designed the user interface of these two tools. Their integration into a real programming environment for Scheme is detailed in the paper.
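The first tool described above reports on allocation frequency. The paper does not give its implementation; as a rough illustration of the idea, the sketch below counts allocations per object kind at instrumented allocation sites (the `AllocationProfiler` class and its method names are hypothetical, not the paper's tool, and Python stands in for Scheme):

```python
from collections import Counter

class AllocationProfiler:
    """Hypothetical sketch: count allocations per object kind so that a
    report on allocation frequency can be produced, in the spirit of the
    paper's first tool (not its actual implementation)."""

    def __init__(self):
        self.counts = Counter()

    def alloc(self, kind, payload):
        # Record one allocation of the given kind, then hand back the object.
        self.counts[kind] += 1
        return payload

    def report(self):
        # Most frequently allocated kinds first.
        return self.counts.most_common()

profiler = AllocationProfiler()
for i in range(3):
    profiler.alloc("pair", (i, i + 1))   # e.g. cons cells
profiler.alloc("string", "hello")
print(profiler.report())  # → [('pair', 3), ('string', 1)]
```

A real tool of this kind would hook the runtime's allocator rather than require explicit `alloc` calls; the counter-per-kind report is the essential output.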

  • Source
    ABSTRACT: The amount of data stored in data warehouses grows so quickly that the warehouses can become saturated. To overcome this problem, we propose a language for specifying forgetting functions on stored data. In order to preserve the possibility of performing interesting analyses of historical data, the specifications include the definition of summaries of the deleted data. These summaries are aggregates and samples of the deleted data and are kept in the data warehouse. Once forgetting functions have been specified, the data warehouse is automatically updated to follow the specifications. This paper presents the specification language, the structure of the summaries, and the algorithms to update the data warehouse.
    Research, Innovation and Vision for the Future, 2007 IEEE International Conference on; 04/2007
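The forgetting-function idea in the abstract above can be illustrated with a small sketch: drop rows older than a cutoff, but retain an aggregate and a sample of what was deleted (the `forget` function, its field names, and the choice of aggregates are all hypothetical, not the paper's language or algorithms):

```python
import random

def forget(rows, cutoff, sample_size=2, seed=0):
    """Hypothetical forgetting function: delete rows with timestamp below
    `cutoff`, but keep a summary (an aggregate plus a small random sample)
    of the deleted data for later historical analyses."""
    kept = [r for r in rows if r["ts"] >= cutoff]
    deleted = [r for r in rows if r["ts"] < cutoff]
    summary = {
        "count": len(deleted),
        "sum_amount": sum(r["amount"] for r in deleted),
        "sample": random.Random(seed).sample(
            deleted, min(sample_size, len(deleted))
        ),
    }
    return kept, summary

rows = [{"ts": t, "amount": t * 10} for t in range(5)]
kept, summary = forget(rows, cutoff=3)
print(len(kept), summary["count"], summary["sum_amount"])  # → 2 3 30
```

The aggregate and the sample together stand in for the deleted rows, which is the trade-off the abstract describes: storage shrinks while coarse historical queries remain answerable.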
  • Source
    ABSTRACT: Scheme uses garbage collection for heap memory management. Ideally, garbage collectors should be able to reclaim all dead objects, i.e. objects that will not be used in the future. However, garbage collectors collect only those dead objects that are not reachable from any program variable. Dead objects that are still reachable from program variables are not reclaimed. In this paper we describe our experiments to measure the effectiveness of garbage collection in MIT/GNU Scheme. We compute the drag time of objects, i.e. the time for which an object remains in heap memory after its last use. The number of dead objects and the drag time together indicate opportunities for improving garbage collection. Our experiments reveal that up to 26% of dead objects remain in memory. The average drag time is up to 37% of execution time. Overall, we observe memory saving potential ranging from 9% to 65%.
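The drag-time metric in the abstract above is simple to state: it is the gap between an object's last use and the moment it is actually reclaimed. A minimal sketch, assuming per-object `last_use`/`freed` timestamps are available from a trace (the `drag_stats` function and its field names are hypothetical, not the paper's tooling):

```python
def drag_stats(objects, total_time):
    """Hypothetical sketch: per-object drag time (freed - last_use), the
    share of objects that lingered after their last use, and the average
    drag as a percentage of total execution time."""
    drags = [o["freed"] - o["last_use"] for o in objects]
    lingered = sum(1 for d in drags if d > 0)
    dead_pct = 100.0 * lingered / len(objects)
    avg_drag_pct = 100.0 * (sum(drags) / len(drags)) / total_time
    return dead_pct, avg_drag_pct

objs = [
    {"last_use": 10, "freed": 10},  # reclaimed right after its last use
    {"last_use": 20, "freed": 60},  # dragged for 40 time units
    {"last_use": 50, "freed": 80},  # dragged for 30 time units
]
dead_pct, avg_drag = drag_stats(objs, total_time=100)
print(round(dead_pct), round(avg_drag))  # → 67 23
```

Numbers like the paper's "26% of dead objects" and "drag time up to 37% of execution time" are exactly these two ratios, computed over a real execution trace rather than three toy objects.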
  • Source
    ABSTRACT: Advances in parallel computation are of central importance to Artificial Intelligence due to the significant amount of time and space their programs require. Functional languages have been identified as providing a clear and concise way of programming parallel machines for artificial intelligence tasks. The problems of exporting, creating, and manipulating processes have been thoroughly studied in relation to the parallelization of functional languages, but none of the necessary support structures needed for the abstraction, like a distributed memory, have been properly designed. In order to design and implement parallel functional languages efficiently, we propose the development of an all-software distributed virtual memory system designed specifically for the memory demands of a functional language. In this paper, we review the MT architecture and briefly survey the related literature that led to its development. We then present empirical results obtained from observing the paging behavior of the MT stack. Our empirical results suggest that LRU is superior to FIFO as a page replacement policy for MT stack pages. We present a proof that LRU is an optimal…
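The LRU-versus-FIFO comparison in the last abstract can be reproduced with a small fault-counting simulation. The sketch below (a generic illustration, not the paper's MT experiments) counts page faults for both policies on a stack-like reference string, where repeated touches near the top of the stack favor LRU:

```python
from collections import OrderedDict, deque

def faults_lru(refs, frames):
    """Count page faults under LRU replacement with `frames` page frames."""
    cache = OrderedDict()
    faults = 0
    for p in refs:
        if p in cache:
            cache.move_to_end(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict least recently used
            cache[p] = True
    return faults

def faults_fifo(refs, frames):
    """Count page faults under FIFO replacement with `frames` page frames."""
    queue, resident, faults = deque(), set(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.popleft())  # evict oldest arrival
            queue.append(p)
            resident.add(p)
    return faults

# A stack-like pattern: pages 1 and 2 (near the stack top) are touched
# repeatedly, which keeps them resident under LRU but not under FIFO.
refs = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2]
print(faults_lru(refs, 3), faults_fifo(refs, 3))  # → 5 7
```

On this reference string LRU faults 5 times against FIFO's 7, the qualitative behavior the abstract reports for MT stack pages: recency tracks the stack's locality, while FIFO evicts hot pages merely because they arrived early.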

