
Performance Impact of Object Oriented Programming


Abstract

It is widely accepted that object-oriented design improves code reusability, facilitates code maintainability, and enables higher levels of abstraction. Although the software engineering community has embraced object-oriented programming for these benefits, it has not been clear what performance overheads are associated with this programming paradigm. In this paper, we present some quantitative results based on the performance of a few programs in C and C++. Several programs were profiled, and statistics from several program executions at various compiler optimization levels were generated on two architectures, the MIPS and the SPARC. One observation was that, in spite of a static code increase in C++, the dynamic instruction counts were either comparable or smaller in C++. However, the cache miss ratios and traffic ratios were significantly worse for C++ (often twice as high). It was also seen that some C++ features, such as function overloading and free unions, did not incur any run-time overhead.
Previous studies have shown that object-oriented programs have different execution characteristics than procedural programs, and that special object-oriented hardware can improve performance. The results of these studies may no longer hold because compiler optimizations can remove a large fraction of the differences. Our measurements show that SELF programs are more similar to C programs than are C++ programs, even though SELF is much more radically object-oriented than C++ and thus should differ much more from C. Furthermore, the benefit of tagged arithmetic instructions in the SPARC architecture (originally motivated by Smalltalk and Lisp implementations) appears to be small. Also, special hardware could hardly reduce message dispatch overhead since dispatch sequences are already very short. Two generic hardware features, instruction cache size and data cache write policy, have a much greater impact on performance.
ATOM (Analysis Tools with OM) is a single framework for building a wide range of customized program analysis tools. It provides the common infrastructure present in all code-instrumenting tools; this is the difficult and time-consuming part. The user simply defines the tool-specific details in instrumentation and analysis routines. Building a basic block counting tool like Pixie with ATOM requires only a page of code. ATOM, using OM link-time technology, organizes the final executable such that the application program and user's analysis routines run in the same address space. Information is directly passed from the application program to the analysis routines through simple procedure calls instead of inter-process communication or files on disk. ATOM takes care that analysis routines do not interfere with the program's execution, and precise information about the program is presented to the analysis routines at all times. ATOM uses no simulation or interpretation. ATOM has been implemented on the Alpha AXP under OSF/1. It is efficient and has been used to build a diverse set of tools for basic block counting, profiling, dynamic memory recording, instruction and data cache simulation, pipeline simulation, evaluating branch prediction, and instruction scheduling.
Since the inception of the von Neumann architecture for computer design, there have been no new paradigms or revolutions in computer architecture. Computer applications have been increasing at an exponential rate; however, the basic computer architectures have remained the same. The conventional computer architectures, which are based on primitive building blocks including arithmetic logic units, floating point processor units, logical shift units, and register file units, have created a tremendous semantic gap and inefficiencies in information system processing. It is about time to revisit the standard von Neumann computation model and question its efficiency, as we are entering a new era of information processing where applications have no boundaries in computation, communication, and information storage. In this paper, we propose a revolutionary computer architecture which avoids the semantic gap and its inefficiencies, and is based on an object-oriented paradigm to provide the benefits of abstraction, inheritance, hierarchy, modularity, extensibility, and polymorphism. We describe the fundamental building blocks for this architecture and propose a possible approach for implementing this new generation of computers, which will not make software and hardware obsolete before coming into existence. We present the design issues related to such architectures and the research directions needed to study their feasibility.
In An Introduction to Object-Oriented Programming, Timothy Budd provides a language-independent presentation of object-oriented principles, such as objects, methods, inheritance (including multiple inheritance) and polymorphism. Examples are drawn from several different languages, including (among others) C++, C#, Java, CLOS, Delphi, Eiffel, Objective-C and Smalltalk. By examining many languages, the reader is better able to appreciate the general principles that lie beyond the syntax of the individual languages.
The state of object-oriented design is evolving rapidly. This survey describes what are currently thought to be the key ideas. Although it is necessarily incomplete, it contains both academic and industrial efforts and describes work in both the United States and Europe. It ignores well-known ideas, like those of Coad and Meyer [34], in favor of less widely known projects. Research in object-oriented design can be divided many ways. Some research is focused on describing a design process. Some is focused on finding rules for good designs. A third approach is to build tools to support design. Most of the research described in this article does all three. We first present work from Alan Snyder at Hewlett-Packard on developing a common framework for object-oriented terminology. The goal of this effort is to develop and communicate a corporate-wide common language for specifying and communicating about objects. We next look into the research activity at Hewlett-Packard, led by Dennis de Champeaux. De Champeaux is developing a model for object-based analysis. His current research focuses on the use of a trigger-based model for inter-object communications and development of a top-down approach to analysis using ensembles. We then survey two research activities that prescribe the design process. Rebecca Wirfs-Brock from Tektronix has been developing an object-oriented design method that focuses on object responsibilities and collaborations. The method includes graphical tools for improving encapsulation and understanding patterns of object communication. Trygve Reenskaug at the Center for Industriforskning in Oslo, Norway has been developing an object-oriented design method that focuses on roles, synthesis, and structuring.
The method, called Object-Oriented Role Analysis, Synthesis and Structuring, is based on first modeling small sub-problems, and then combining small models into larger ones in a controlled manner using both inheritance (synthesis) and run-time binding (structuring). We then present investigations by Ralph Johnson at the University of Illinois at Urbana-Champaign into object-oriented frameworks and the reuse of large-scale designs. A framework is a high-level design or application architecture and consists of a suite of classes that are specifically designed to be refined and used as a group. Past work has focused on describing frameworks and how they are developed. Current work includes the design of tools to make it easier to design frameworks. Finally, we present some results from the research group in object-oriented software engineering at Northeastern University, led by Karl Lieberherr. They have been working on object-oriented Computer Assisted Software Engineering (CASE) technology, called the Demeter system, which generates language-specific class definitions from language-independent class dictionaries. The Demeter system includes tools for checking design rules and for implementing a design.
Object-oriented programming is one of today's buzzwords. On the one hand it is a programming paradigm in its own right. On the other hand it is a set of software engineering tools for building more reliable and reusable systems. Another programming style, which has already shown its power in this field, is structured programming. In this paper we look at the relationship between structured programming and object-oriented programming. We give a definition of structured programming and check the object-oriented features encapsulation, inheritance, and messages for compatibility. Two problems arise. The first concerns interfaces and inheritance, the second polymorphism and dynamic binding. We do not end with a yes-or-no answer, but try to give a basis for further discussion. This may help to place object-oriented programming within the wide range of tools for computer programming and computer science. It may also help to abandon the hypothesis that object-oriented programming is the solution to all problems.