A J G Hey

University of Southampton, Southampton, England, United Kingdom


Publications (113) · 235.85 Total Impact

  • Oscar Naím · Anthony J. G. Hey
    ABSTRACT: Performance visualization is the use of graphical display techniques to analyze performance data and improve understanding of complex performance phenomena. Performance visualization systems for parallel programs have proved helpful in the past and are commonly used to improve parallel program performance. However, despite the advances that have been made in visualizing scientific data, techniques for visualizing the performance of parallel programs remain ad hoc, and performance visualization becomes more difficult as the parallel system becomes more complex. The use of scientific visualization tools (e.g. AVS, the Application Visualization System) to display performance data is becoming a very powerful alternative for supporting performance analysis of parallel programs. One advantage of this approach is that no tool development is required and every feature of the data visualization tool remains available for further data analysis. In this paper the Do-Loop-Surface (DLS) display, an abstract view of the performance of a particular do-loop in a program, implemented using AVS, is presented as an example of how a data visualization tool can be used to define new abstract representations of performance, helping the user to analyze complex data potentially generated by a large number of processors.
    No preview · Chapter · Oct 2006
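    The DLS idea can be sketched as follows — a minimal, hypothetical reconstruction (the paper's actual display is built inside AVS; the array layout and names here are illustrative assumptions):

    ```python
    import numpy as np

    def do_loop_surface(timings):
        """Arrange per-iteration, per-processor do-loop timings as a 2-D
        'surface' (iterations x processors) — the data structure a tool
        such as AVS can render as a height field."""
        surface = np.asarray(timings, dtype=float)
        if surface.ndim != 2:
            raise ValueError("expected a 2-D array: iterations x processors")
        return surface

    # Toy trace: 4 iterations of a do-loop on 3 processors (seconds).
    trace = [[0.10, 0.12, 0.30],
             [0.11, 0.13, 0.31],
             [0.10, 0.12, 0.29],
             [0.11, 0.14, 0.30]]
    surface = do_loop_surface(trace)

    # The surface makes load imbalance visible at a glance: here the
    # column for processor 2 is roughly three times taller than the rest.
    per_processor = surface.mean(axis=0)
    ```

    Rendering the matrix as a surface rather than a table is the point: with hundreds of processors, the shape of the surface reveals imbalance that a list of numbers would hide.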
  • ABSTRACT: This paper explores methods for extracting parallelism from a wide variety of numerical applications. We investigate communications overheads and load-balancing for networks of transputers. After a discussion of some practical strategies for constructing occam programs, two case studies are analysed in detail.
    No preview · Chapter · Jan 2006
  • A. J. G. Hey
    ABSTRACT: The paper discusses the parallel programming lessons learnt from the ESPRIT SuperNode project that developed the T800 Transputer. After a brief review of some purportedly portable parallel programming environments, the Genesis parallel benchmarking project is described. The next generation of Transputer components is being developed in the ESPRIT-2 PUMA project, and the goals of this project are briefly outlined. The paper closes with some speculations on the possibility of truly general-purpose parallel computing and reviews the work of Valiant.
    No preview · Chapter · Jan 2006
  • Source
    Anthony J. G. Hey · Geoffrey Fox
    ABSTRACT: This editorial describes four papers that summarize key Grid technology capabilities to support distributed e-Science applications. These papers discuss the Condor system supporting computing communities, the OGSA-DAI service interfaces for databases, the WS-I+ Grid Service profile and finally WS-GAF (the Web Service Grid Application Framework). We discuss the confluence of mainstream IT industry development and the very latest science and computer science research and urge the communities to reach consensus rapidly. Agreement on a set of core Web Service standards is essential to allow developers to build Grids and distributed business and science applications with some assurance that their investment will not be obviated by the changing Web Service frameworks. Copyright © 2005 John Wiley & Sons, Ltd.
    Full-text · Article · Feb 2005 · Concurrency and Computation Practice and Experience
  • Source
    John R. Gurd · Anthony J. G. Hey · Juri Papay · Graham D. Riley

    Full-text · Article · Feb 2005 · Concurrency and Computation Practice and Experience
  • A. J. G. Hey · J. Papay · M. Surridge
    ABSTRACT: Performance engineering can be described as a collection of techniques and methodologies whose aim is to provide reliable prediction, measurement and validation of the performance of applications on a variety of computing platforms. This paper reviews techniques for performance estimation and performance engineering developed at the University of Southampton and presents application case studies in task scheduling for engineering meta-applications, and capacity engineering for a financial transaction processing system. These show that it is important to describe performance in terms of a resource model, and that the choice of models may have to trade accuracy for utility in addressing the operational issues. We then present work from the ongoing EU-funded Grid project GRIA, and show how lessons learned from the earlier work have been applied to support a viable business model for Grid service delivery to a specified quality of service level. The key in this case is to accept the limitations of performance estimation methods, and design business models that take these limitations into account rather than attempting to provide hard guarantees over performance. We conclude by identifying some of the key lessons learned in the course of our work over many years and suggest possible directions for future investigations. Copyright © 2005 John Wiley & Sons, Ltd.
    No preview · Article · Feb 2005 · Concurrency and Computation Practice and Experience
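    A resource model of the kind the abstract describes might, in minimal form, look like this — an illustrative sketch, not the paper's actual model (the parameters and formula are assumptions):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Resource:
        flops_per_sec: float   # sustained compute speed of the platform
        latency_sec: float     # per-message network latency
        bytes_per_sec: float   # network bandwidth

    def predict_runtime(flop_count, messages, bytes_sent, r):
        """Estimate task runtime on a resource: compute time plus a
        simple latency/bandwidth communication term."""
        compute = flop_count / r.flops_per_sec
        comms = messages * r.latency_sec + bytes_sent / r.bytes_per_sec
        return compute + comms

    # A 2 GFLOP job exchanging 100 messages / 10 MB on a 1 GFLOP/s node.
    node = Resource(flops_per_sec=1e9, latency_sec=50e-6, bytes_per_sec=100e6)
    t = predict_runtime(flop_count=2e9, messages=100, bytes_sent=10e6, r=node)
    ```

    Even a model this coarse supports the trade-off the abstract mentions: it is inaccurate in detail, but cheap enough to evaluate for every candidate resource when making scheduling or capacity decisions.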
  • Source

    Full-text · Article · Jan 2005
  • Anthony J. G. Hey · Juri Papay · Mike Surridge

    No preview · Article · Jan 2005
  • Source
    Anthony J. G. Hey · Juri Papay · Andy J. Keane · Simon J. Cox
    ABSTRACT: The aim of the project described in this paper was to use modern software component technologies such as CORBA, Java and XML for the development of key modules which can be used for the rapid prototyping of application-specific Problem Solving Environments (PSEs). The software components developed in this project were a user interface, scheduler, monitor, various components for handling interrupts, synchronisation and task execution, and software for photonic crystal simulations. The key requirements for the PSE were to provide support for distributed computation in a heterogeneous environment, a user-friendly interface for graphical programming, intelligent resource management, object-oriented design, a high level of portability and software maintainability, reuse of legacy code and application of middleware technologies in software design.
    Full-text · Conference Paper · Aug 2002
  • Source
    Geoffrey C. Fox · Anthony J. G. Hey

    Full-text · Article · Jan 2001 · Concurrency and Computation Practice and Experience
  • Source
    Nick Floros · Anthony J. G. Hey · K. E. Meacham · Juri Papay · Mike Surridge
    ABSTRACT: This paper defines meta-applications as large, related collections of computational tasks, designed to achieve a specific overall result, running on a (possibly geographically) distributed, non-dedicated meta-computing platform. To carry out such applications in an industrial context, one requires resource management and job scheduling facilities (including capacity planning), to ensure that the application is feasible using the available resources, that each component job will be sent to an appropriate resource, and that everything will finish before the computing resources are needed for other purposes. This requirement has been addressed by the PAC in three major European collaborative projects: PROMENVIR, TOOLSHED and HPC-VAO, leading to the creation of job scheduling software, in which scheduling is brought together with performance modelling of applications and systems, to provide meta-applications management facilities. This software is described, focusing on the performance modelling approach which was needed to support it. Early results from this approach are discussed, raising some new issues in performance modelling and software deployment for meta-applications. An indication is given about ongoing work at the PAC designed to overcome current limitations and address these outstanding issues. ©1999 Published by Elsevier Science B.V. All rights reserved.
    Preview · Article · Oct 1999 · Future Generation Computer Systems
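    The core scheduling requirement — send each component job to an appropriate resource using predicted runtimes — can be sketched as a greedy earliest-completion-time heuristic. This is an illustrative assumption, not the PAC software's actual algorithm:

    ```python
    def schedule(tasks, resources, predict):
        """Greedy list scheduling: assign each task to the resource that
        finishes it earliest, given per-(task, resource) runtime predictions.

        tasks     -- list of task ids
        resources -- list of resource ids
        predict   -- predict(task, resource) -> estimated runtime (seconds)
        """
        free_at = {r: 0.0 for r in resources}   # when each resource frees up
        plan = {}
        for t in tasks:
            # pick the resource with the earliest predicted completion time
            best = min(resources, key=lambda r: free_at[r] + predict(t, r))
            start = free_at[best]
            free_at[best] = start + predict(t, best)
            plan[t] = (best, start)
        return plan, max(free_at.values())      # assignment and makespan

    # Toy example: four one-second jobs over two identical nodes.
    plan, makespan = schedule(["a", "b", "c", "d"], ["n1", "n2"],
                              lambda t, r: 1.0)
    ```

    The returned makespan is exactly the capacity-planning check the abstract describes: whether everything finishes before the resources are needed for other purposes.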
  • Conference Paper: White-Box Benchmarking.
    Emilio Hernández · Anthony J. G. Hey
    ABSTRACT: Structural performance analysis of the NAS parallel benchmarks is used to time code sections and specific classes of activity, such as communication or data movement. This technique is called white-box benchmarking because, like the white-box methodologies used in program testing, it does not treat the programs as black boxes. The timing methodology is portable, which is indispensable for comparative benchmarking across different computer systems. A combination of conditional compilation and code instrumentation is used to measure execution time related to different aspects of application performance. This benchmarking methodology is proposed to help understand parallel application behaviour on distributed-memory parallel platforms.
    No preview · Conference Paper · Jan 1998
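    The instrumentation idea — accumulating execution time per class of activity, with the instrumentation switched in or out at build time — might be sketched like this (in Python rather than the Fortran/C setting of the NAS benchmarks; the flag and activity names are illustrative):

    ```python
    import time
    from collections import defaultdict
    from contextlib import contextmanager

    INSTRUMENT = True                 # stands in for conditional compilation
    totals = defaultdict(float)       # seconds accumulated per activity class

    @contextmanager
    def section(activity):
        """Time a code section and charge it to an activity class
        (e.g. 'compute', 'communication', 'data-movement')."""
        if not INSTRUMENT:
            yield
            return
        start = time.perf_counter()
        try:
            yield
        finally:
            totals[activity] += time.perf_counter() - start

    # Usage: wrap the structurally distinct sections of the benchmark body.
    with section("compute"):
        sum(i * i for i in range(100_000))
    with section("communication"):
        time.sleep(0.01)              # placeholder for a message exchange
    ```

    Comparing the per-activity totals across machines is what makes the benchmarking "white-box": the breakdown, not just the overall time, is portable across systems.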
  • A.J.G. Hey · C.J. Scott · M. Surridge · C. Upstill
    ABSTRACT: This paper is concerned with the use of Massively Parallel Processing (MPP) systems by industry and commerce. In this context, it is argued that the definition of MPP should be extended to include LAN/WAN clusters or `meta-computers'. The frontier of research for industry has moved on from mere parallel implementations of scientific simulations or commercial databases; rather, it is concerned with the problem of integrating computational and informational resources in a seamless and effective manner. Examples taken from recent research projects at the Parallel Applications Centre (PAC) are used to illustrate these points.
    No preview · Conference Paper · Dec 1997
  • Source
    Mark Papiani · Alistair N. Dunlop · Anthony J. G. Hey
    ABSTRACT: Reformatting information currently held in databases into HyperText Markup Language (HTML) pages suitable for the World-Wide Web (WWW) requires significant effort, both in creating the pages initially and in their subsequent maintenance. We avoid these costs by directly coupling a WWW server to the source data within a database using additional software we have developed. This software layer automatically generates the WWW interface to the database using meta-data from the catalogue. The resulting interface allows either direct entry of SQL queries or an intuitive graphical means of specifying queries. Query results are returned as dynamic HTML pages. Browsing of the database is made possible by creating dynamic HyperText links that are included automatically within the query results. These links are derived from referential integrity constraints defined in the meta-data.
    Preview · Article · May 1997
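    The metadata-driven interface generation can be sketched as follows — a minimal reconstruction using SQLite's catalogue (the original coupled a WWW server to a relational database catalogue; the table, form layout and URL here are illustrative assumptions):

    ```python
    import sqlite3
    from html import escape

    def table_form(conn, table):
        """Generate an HTML query form for a table straight from the
        database catalogue, so no pages need to be written by hand."""
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        fields = "\n".join(
            f'  <label>{escape(name)}: <input name="{escape(name)}"></label>'
            for _cid, name, _type, *_rest in cols
        )
        return f'<form action="/query/{escape(table)}">\n{fields}\n</form>'

    # Toy catalogue: one table; the form is derived entirely from its schema.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE papers (id INTEGER PRIMARY KEY,"
                 " title TEXT, year INTEGER)")
    html = table_form(conn, "papers")
    ```

    Because the form is derived from the catalogue, schema changes propagate to the interface automatically — the maintenance saving the abstract claims.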
  • Source
    A.J.G. Hey
    ABSTRACT: In the desktop networked PC age, does high performance computing have a future? Drawing on the history of the computer and the present state of the art, the article concludes that HPC is alive and well, with healthy prospects for many years to come.
    Preview · Article · Mar 1997 · Computing and Control Engineering
  • Oscar Naim · Anthony J. G. Hey

    No preview · Conference Paper · Jan 1997
  • Vladimir Getov · Emilio Hernández · Anthony J. G. Hey

    No preview · Conference Paper · Jan 1997
  • Mark Papiani · Alistair N. Dunlop · Anthony J. G. Hey
    No preview · Conference Paper · Jan 1997
  • ABSTRACT: PALLAS is an independent German software company specializing in High Performance Computing. Apart from consulting and training services, a programming environment for developing, porting and tuning parallel applications is available. It consists of VAMPIR (versatile performance analysis of MPI programs), TotalView (parallel debugger for MPI/PARMACS/PVM programs) and HPF (High Performance Fortran from The Portland Group), and provides quality and functionality across a wide range of parallel platforms, from workstations to MPP systems.
    No preview · Article · Jan 1996
  • Source
    Mark Papiani · Anthony J. G. Hey · Roger W. Hockney
    ABSTRACT: Unlike single-processor benchmarks, multi-processor benchmarks can yield tens of numbers for each benchmark on each computer, as factors such as the number of processors and problem size are varied. A graphical display of performance surfaces therefore provides a satisfactory way of comparing results. The University of Southampton has developed the Graphical Benchmark Information Service (GBIS) on the World Wide Web (WWW) to display interactively graphs of user-selected benchmark results from the GENESIS and PARKBENCH benchmark suites.
    Preview · Article · Dec 1995 · Scientific Programming

Publication Stats

1k Citations
235.85 Total Impact Points


  • 1976-2006
    • University of Southampton
      • Department of Electronics and Computer Science (ECS)
      • Department of Mathematics
      Southampton, England, United Kingdom
  • 2002-2005
    • Engineering and Physical Sciences Research Council
      Swindon, England, United Kingdom
  • 1979
    • Massachusetts Institute of Technology
      Cambridge, Massachusetts, United States
  • 1977
    • The University of Edinburgh
      Edinburgh, Scotland, United Kingdom
  • 1973
    • CERN
      Genève, Geneva, Switzerland
  • 1971-1973
    • California Institute of Technology
      Pasadena, California, United States
    • University of Oxford
      Oxford, England, United Kingdom