A J G Hey

University of Southampton, Southampton, England, United Kingdom

Publications (110)

  • Oscar Naím, Anthony J. G. Hey
    ABSTRACT: Performance visualization is the use of graphical display techniques for the analysis of performance data, in order to improve understanding of complex performance phenomena. Performance visualization systems for parallel programs have proved helpful in the past and are commonly used to improve parallel program performance. However, despite the advances made in visualizing scientific data, techniques for visualizing the performance of parallel programs remain ad hoc, and performance visualization becomes more difficult as the parallel system becomes more complex. The use of scientific visualization tools (e.g. AVS, the Application Visualization System) to display performance data is becoming a powerful alternative for supporting performance analysis of parallel programs. One advantage of this approach is that no tool development is required, and every feature of the data visualization tool can be used for further data analysis. In this paper the Do-Loop-Surface (DLS) display, an abstract view of the performance of a particular do-loop in a program, implemented using AVS, is presented as an example of how a data visualization tool can be used to define new abstract representations of performance, helping the user to analyze complex data potentially generated by a large number of processors.
    10/2006: pages 878-887;
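    A minimal sketch of the Do-Loop-Surface idea, assuming per-processor, per-iteration timings for one do-loop (all data here is made up, and matplotlib stands in for AVS, which the paper actually uses):

        # Do-Loop-Surface (DLS) sketch: execution time of one do-loop rendered
        # as a surface over (processor, iteration). Hypothetical data.
        import numpy as np
        import matplotlib.pyplot as plt

        n_procs, n_iters = 16, 32
        rng = np.random.default_rng(0)
        # times[p, i] = measured duration of iteration i on processor p
        times = 1.0 + 0.1 * rng.standard_normal((n_procs, n_iters))
        times[5, :] += 0.5  # an overloaded processor appears as a ridge

        iter_grid, proc_grid = np.meshgrid(np.arange(n_iters), np.arange(n_procs))
        ax = plt.figure().add_subplot(projection="3d")
        ax.plot_surface(proc_grid, iter_grid, times, cmap="viridis")
        ax.set_xlabel("processor")
        ax.set_ylabel("iteration")
        ax.set_zlabel("time (s)")
        plt.show()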
  • A. J. G. Hey
    ABSTRACT: The paper discusses the parallel programming lessons learnt from the ESPRIT SuperNode project that developed the T800 Transputer. After a brief review of some purportedly portable parallel programming environments, the Genesis parallel benchmarking project is described. The next generation of Transputer components is being developed in the ESPRIT-2 PUMA project, and the goals of this project are briefly outlined. The paper closes with some speculations on the possibility of truly general-purpose parallel computing and reviews the work of Valiant.
    01/2006: pages 99-111;
  • ABSTRACT: This paper explores methods for extracting parallelism from a wide variety of numerical applications. We investigate communications overheads and load-balancing for networks of transputers. After a discussion of some practical strategies for constructing occam programs, two case studies are analysed in detail.
    01/2006: pages 278-294;
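    A toy model, not taken from the paper, of the communications-overhead and load-balancing trade-off it investigates: per-step time on P processors is the slowest processor's share of the work plus a fixed message cost.

        # Toy speedup model (illustrative only): tasks are dealt round-robin,
        # and each step pays a fixed communication cost on top of the compute.
        def step_time(work, n_procs, t_comm):
            shares = [sum(work[p::n_procs]) for p in range(n_procs)]
            return max(shares) + t_comm  # slowest processor dominates

        work = [1.0] * 60 + [4.0] * 4   # mostly uniform, a few heavy tasks
        serial = sum(work)
        for p in (2, 4, 8, 16):
            t = step_time(work, p, t_comm=2.0)
            print(f"P={p:2d}  time={t:6.2f}  speedup={serial / t:5.2f}")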
  • A. J. G. Hey, J. Papay, M. Surridge
    ABSTRACT: Performance engineering can be described as a collection of techniques and methodologies whose aim is to provide reliable prediction, measurement and validation of the performance of applications on a variety of computing platforms. This paper reviews techniques for performance estimation and performance engineering developed at the University of Southampton, and presents application case studies in task scheduling for engineering meta-applications and in capacity engineering for a financial transaction processing system. These show that it is important to describe performance in terms of a resource model, and that the choice of models may have to trade accuracy for utility in addressing the operational issues. We then present work from the ongoing EU-funded Grid project GRIA, and show how lessons learned from the earlier work have been applied to support a viable business model for Grid service delivery to a specified quality of service level. The key in this case is to accept the limitations of performance estimation methods and to design business models that take these limitations into account, rather than attempting to provide hard guarantees over performance. We conclude by identifying some of the key lessons learned in the course of our work over many years and suggest possible directions for future investigations. Copyright © 2005 John Wiley & Sons, Ltd.
    Concurrency and Computation Practice and Experience 02/2005; 17(2‐4):297 - 316. · 0.85 Impact Factor
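    A minimal sketch of the resource-model view advocated above, with hypothetical names and numbers: application demands are expressed against the platform's resource rates, and the per-resource service times sum to a runtime estimate.

        # Hypothetical resource model: a job declares total demand per resource,
        # the platform declares service rates, and predicted runtime is the sum
        # of per-resource service times. A deliberately crude sketch.
        PLATFORM = {"cpu": 2.0e9, "net": 100e6}  # ops/s, bytes/s

        def predict_runtime(demands):
            return sum(amount / PLATFORM[res] for res, amount in demands.items())

        job = {"cpu": 6.0e11, "net": 2.0e9}      # 600 Gop compute, 2 GB traffic
        print(f"predicted runtime: {predict_runtime(job):.1f} s")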
  • John R. Gurd, Anthony J. G. Hey, Juri Papay, Graham D. Riley
    Concurrency and Computation Practice and Experience 02/2005; 17:95-98. · 0.85 Impact Factor
  • Anthony J. G. Hey, Geoffrey Fox
    ABSTRACT: This editorial describes four papers that summarize key Grid technology capabilities to support distributed e-Science applications. These papers discuss the Condor system supporting computing communities, the OGSA-DAI service interfaces for databases, the WS-I+ Grid Service profile and finally WS-GAF (the Web Service Grid Application Framework). We discuss the confluence of mainstream IT industry development and the very latest science and computer science research and urge the communities to reach consensus rapidly. Agreement on a set of core Web Service standards is essential to allow developers to build Grids and distributed business and science applications with some assurance that their investment will not be obviated by the changing Web Service frameworks. Copyright © 2005 John Wiley & Sons, Ltd.
    Concurrency and Computation Practice and Experience 02/2005; 17:317-322. · 0.85 Impact Factor
  • Concurrency and Computation Practice and Experience 01/2005; 17:377-389.
  • Anthony J. G. Hey, Juri Papay, Andy J. Keane, Simon J. Cox
    ABSTRACT: The aim of the project described in this paper was to use modern software component technologies such as CORBA, Java and XML to develop key modules that can be used for the rapid prototyping of application-specific Problem Solving Environments (PSEs). The software components developed in this project were a user interface, scheduler, monitor, various components for handling interrupts, synchronisation and task execution, and software for photonic crystal simulations. The key requirements for the PSE were support for distributed computation in a heterogeneous environment, a user-friendly interface for graphical programming, intelligent resource management, object-oriented design, a high level of portability and software maintainability, reuse of legacy code and the application of middleware technologies in software design.
    Euro-Par 2002, Parallel Processing, 8th International Euro-Par Conference, Paderborn, Germany, August 27-30, 2002, Proceedings; 01/2002
  • Geoffrey Fox, Anthony J. G. Hey
    Concurrency and Computation Practice and Experience 01/2001; 13:1-2. · 0.85 Impact Factor
  • ABSTRACT: This paper defines meta-applications as large, related collections of computational tasks, designed to achieve a specific overall result, running on a (possibly geographically) distributed, non-dedicated meta-computing platform. To carry out such applications in an industrial context, one requires resource management and job scheduling facilities (including capacity planning) to ensure that the application is feasible using the available resources, that each component job will be sent to an appropriate resource, and that everything will finish before the computing resources are needed for other purposes. This requirement has been addressed by the PAC in three major European collaborative projects: PROMENVIR, TOOLSHED and HPC-VAO, leading to the creation of job scheduling software in which scheduling is brought together with performance modelling of applications and systems to provide meta-applications management facilities. This software is described, focusing on the performance modelling approach which was needed to support it. Early results from this approach are discussed, raising some new issues in performance modelling and software deployment for meta-applications. An indication is given of ongoing work at the PAC designed to overcome current limitations and address these outstanding issues. ©1999 Published by Elsevier Science B.V. All rights reserved.
    Future Generation Computer Systems 10/1999; 15:723-734. · 2.64 Impact Factor
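    A sketch of the scheduling idea described above, assuming (hypothetically) that a performance model can predict each job's runtime on each resource; jobs are then placed greedily on whichever resource would finish them earliest.

        # Greedy list scheduling driven by a performance model (all names and
        # numbers hypothetical). predict(job, res) -> estimated runtime.
        def schedule(jobs, busy, predict):
            plan = []
            for job in sorted(jobs, reverse=True):     # longest jobs first
                best = min(busy, key=lambda r: busy[r] + predict(job, r))
                busy[best] += predict(job, best)
                plan.append((job, best, busy[best]))
            return plan

        speeds = {"cluster-a": 1.0, "cluster-b": 0.5}  # relative speeds
        predict = lambda cost, res: cost / speeds[res] # a job is its base cost
        for job, res, finish in schedule([8, 3, 5, 2, 9],
                                         dict.fromkeys(speeds, 0.0), predict):
            print(f"job cost {job} -> {res}, finishes at t={finish:.1f}")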
  • Conference Paper: White-Box Benchmarking.
    Emilio Hernández, Anthony J. G. Hey
    ABSTRACT: Structural performance analysis of the NAS parallel benchmarks is used to time code sections and specific classes of activity, such as communication or data movement. This technique is called white-box benchmarking because, like the white-box methodologies used in program testing, the programs are not treated as black boxes. The timing methodology is portable, which is indispensable for comparative benchmarking across different computer systems. A combination of conditional compilation and code instrumentation is used to measure execution time related to different aspects of application performance. This benchmarking methodology is proposed to help understand parallel application behaviour on distributed-memory parallel platforms.
    Euro-Par '98 Parallel Processing, 4th International Euro-Par Conference, Southampton, UK, September 1-4, 1998, Proceedings; 01/1998
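    A sketch of the instrumentation idea: the paper uses conditional compilation in the benchmark sources, so as a stand-in this (hypothetical) Python version accumulates time per activity class behind a flag.

        # White-box timing sketch: accumulate execution time per activity
        # class ("compute", "comm", ...), toggled by a flag that stands in
        # for conditional compilation. Illustrative, not from the paper.
        import time
        from collections import defaultdict
        from contextlib import contextmanager

        INSTRUMENT = True
        totals = defaultdict(float)

        @contextmanager
        def section(activity_class):
            if not INSTRUMENT:
                yield
                return
            t0 = time.perf_counter()
            try:
                yield
            finally:
                totals[activity_class] += time.perf_counter() - t0

        with section("compute"):
            sum(i * i for i in range(1_000_000))
        with section("comm"):
            time.sleep(0.01)  # stand-in for a message exchange

        for cls, t in totals.items():
            print(f"{cls}: {t:.4f} s")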
  • ABSTRACT: This paper is concerned with the use of Massively Parallel Processing (MPP) systems by industry and commerce. In this context, it is argued that the definition of MPP should be extended to include LAN/WAN clusters or 'meta-computers'. The frontier for industrial research has moved on from mere parallel implementations of scientific simulations or commercial databases; rather, it is concerned with the problem of integrating computational and informational resources in a seamless and effective manner. Examples taken from recent research projects at the Parallel Applications Centre (PAC) are used to illustrate these points.
    Proceedings of the Third Working Conference on Massively Parallel Programming Models, 1997; 12/1997
  • Mark Papiani, Alistair N. Dunlop, Anthony J. G. Hey
    ABSTRACT: Reformatting information currently held in databases into HyperText Markup Language (HTML) pages suitable for the World-Wide Web (WWW) requires significant effort, both in creating the pages initially and in their subsequent maintenance. We avoid these costs by directly coupling a WWW server to the source data within a database using additional software we have developed. This software layer automatically generates the WWW interface to the database using meta-data from the catalogue. The resulting interface allows either direct entry of SQL queries or an intuitive graphical means of specifying queries. Query results are returned as dynamic HTML pages. Browsing of the database is made possible by creating dynamic hypertext links that are included automatically within the query results. These links are derived from referential integrity constraints defined in the meta-data.
    05/1997;
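    A minimal sketch of the metadata-driven approach, with SQLite standing in for the DBMS catalogue and a made-up URL scheme: foreign-key metadata becomes hyperlinks in the generated HTML, much as the referential integrity constraints do above.

        # Sketch: render a table as HTML from catalogue metadata alone,
        # turning foreign-key columns into hyperlinks.
        import html
        import sqlite3

        def render_table(conn, table):
            cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
            fks = {r[3]: (r[2], r[4])  # from-column -> (ref table, ref column)
                   for r in conn.execute(f"PRAGMA foreign_key_list({table})")}
            out = ["<table>",
                   "<tr>" + "".join(f"<th>{c}</th>" for c in cols) + "</tr>"]
            for row in conn.execute(f"SELECT * FROM {table}"):
                cells = []
                for col, val in zip(cols, row):
                    text = html.escape(str(val))
                    if col in fks:  # referential integrity -> hypertext link
                        ref_t, ref_c = fks[col]
                        text = f'<a href="?table={ref_t}&amp;{ref_c}={val}">{text}</a>'
                    cells.append(f"<td>{text}</td>")
                out.append("<tr>" + "".join(cells) + "</tr>")
            return "\n".join(out + ["</table>"])

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE dept(id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE emp(id INTEGER PRIMARY KEY, name TEXT,
                             dept_id INTEGER REFERENCES dept(id));
            INSERT INTO dept VALUES (1, 'physics');
            INSERT INTO emp VALUES (1, 'ada', 1);""")
        print(render_table(conn, "emp"))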
  • A.J.G. Hey
    ABSTRACT: In the desktop networked PC age, does high performance computing have a future? Drawing on the history of the computer and the present state of the art, the article concludes that HPC is alive and well, with healthy prospects for many years to come
    Computing and Control Engineering 03/1997; · 0.16 Impact Factor
  • Mark Papiani, Alistair N. Dunlop, Anthony J. G. Hey
    Advances in Databases, 15th British National Conference on Databases, BNCOD 15, London, United Kingdom, July 7-9, 1997, Proceedings; 01/1997
  • Oscar Naim, Anthony J. G. Hey
    High-Performance Computing and Networking, International Conference and Exhibition, HPCN Europe 1997, Vienna, Austria, April 28-30, 1997, Proceedings; 01/1997
  • Vladimir Getov, Emilio Hernández, Anthony J. G. Hey
    Euro-Par '97 Parallel Processing, Third International Euro-Par Conference, Passau, Germany, August 26-29, 1997, Proceedings; 01/1997
  • ABSTRACT: PALLAS is an independent German software company specializing in High Performance Computing. Apart from consulting and training services, a programming environment for developing, porting and tuning parallel applications is available. It consists of VAMPIR (versatile performance analysis of MPI programs), TotalView (a parallel debugger for MPI/PARMACS/PVM programs) and HPF (High Performance Fortran from The Portland Group), and provides quality and functionality across a wide range of parallel platforms, from workstations to MPP systems.
    01/1996;
  • ABSTRACT: The Genesis benchmark suite has been assembled to evaluate the performance of distributed-memory MIMD systems. The problems selected all have a scientific origin (mostly from physics or theoretical chemistry), and range from synthetic code fragments designed to measure the basic hardware properties of the computer (especially communication and synchronisation overheads), through commonly used library subroutines, to full application codes. This is the second of a series of papers on the Genesis distributed-memory benchmarks, which were developed under the European ESPRIT research program. Results are presented for the SUPRENUM and iPSC/860 computers when running the following benchmarks: COMMS1 (communications), TRANS1 (matrix transpose), FFT1 (fast Fourier transform) and QCD2 (conjugate gradient kernel). The theoretical predictions are compared with, or fitted to, the measured results, and then used to predict (with due caution) how the performance might scale for larger problems and more processors than were actually available during the benchmarking.
    Concurrency Practice and Experience 08/1995; 7(6):543 - 570.
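    The COMMS1-style analysis fits ping-pong timings to a linear model; in Hockney's parameterisation, T(n) = t0 + n/r_inf, with n_1/2 = t0 * r_inf the message length that achieves half the asymptotic bandwidth. A least-squares sketch with made-up timings:

        # Fit T(n) = t0 + n / r_inf to (message length, time) pairs and report
        # Hockney's parameters r_inf and n_1/2. The timings are invented.
        import numpy as np

        sizes = np.array([1e2, 1e3, 1e4, 1e5, 1e6])                # bytes
        times = np.array([55e-6, 65e-6, 160e-6, 1.1e-3, 10.1e-3])  # seconds

        slope, t0 = np.polyfit(sizes, times, 1)  # T = slope * n + t0
        r_inf = 1.0 / slope                      # asymptotic bandwidth, bytes/s
        n_half = t0 * r_inf                      # half-performance message length
        print(f"t0 = {t0 * 1e6:.1f} us, r_inf = {r_inf / 1e6:.1f} MB/s, "
              f"n_1/2 = {n_half:.0f} bytes")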

Publication Stats

1k Citations
179.68 Total Impact Points

Institutions

  • 1976–2006
    • University of Southampton
      • Faculty of Physical and Applied Sciences
      • Department of Electronics and Computer Science (ECS)
      Southampton, England, United Kingdom
  • 2002–2005
    • Engineering and Physical Sciences Research Council
      Swindon, England, United Kingdom
  • 1971–1987
    • California Institute of Technology
      Pasadena, California, United States
  • 1982
    • Washington University in St. Louis
      • Department of Physics
      Saint Louis, MO, United States
  • 1973–1978
    • CERN
      Geneva, Switzerland
  • 1975–1977
    • University of Oxford
      • Department of Physics
      Oxford, England, United Kingdom