Anthony J. G. Hey

California Institute of Technology, Pasadena, California, United States

Publications (10)

  • Anthony J. G. Hey, Geoffrey Fox
    ABSTRACT: This editorial describes four papers that summarize key Grid technology capabilities to support distributed e-Science applications. These papers discuss the Condor system supporting computing communities, the OGSA-DAI service interfaces for databases, the WS-I+ Grid Service profile and finally WS-GAF (the Web Service Grid Application Framework). We discuss the confluence of mainstream IT industry development and the very latest science and computer science research and urge the communities to reach consensus rapidly. Agreement on a set of core Web Service standards is essential to allow developers to build Grids and distributed business and science applications with some assurance that their investment will not be obviated by the changing Web Service frameworks. Copyright © 2005 John Wiley & Sons, Ltd.
    Concurrency and Computation: Practice and Experience. 01/2005; 17:317-322.
  • Concurrency and Computation: Practice and Experience. 01/2005; 17:377-389.
  • Geoffrey Fox, Anthony J. G. Hey
    Concurrency and Computation: Practice and Experience. 01/2001; 13:1-2.
  • Geoffrey Fox, Steve W. Otto, Anthony J. G. Hey
    ABSTRACT: We discuss algorithms for matrix multiplication on a concurrent processor containing a two-dimensional mesh or richer topology. We present detailed performance measurements on hypercubes with 4, 16, and 64 nodes, and analyze them in terms of communication overhead and load balancing. We show that the decomposition into square subblocks is optimal. C code implementing the algorithms is available.
    Parallel Computing. 01/1987; 4:17-31.
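The square-subblock decomposition this abstract analyzes can be sketched serially. The following is a hypothetical illustration, not the paper's published C code: the dimension `N` and block size `B` are assumed values, and the block loops here merely model the partitioning that, on the concurrent processor, would assign one subblock per node.

```c
/* Hypothetical sketch (not the authors' code): serial blocked matrix
   multiply illustrating the square-subblock decomposition. On the
   mesh-connected machine each node would own one B x B subblock and
   exchange subblocks with its neighbours; here the three outer loops
   simply enumerate the subblock pairs on a single processor. */
#include <string.h>

#define N 8  /* matrix dimension (assumed, small for illustration) */
#define B 4  /* square subblock size; B must divide N */

/* C = A * Bm, computed subblock by subblock. */
void blocked_matmul(double A[N][N], double Bm[N][N], double C[N][N])
{
    memset(C, 0, sizeof(double) * N * N);
    for (int bi = 0; bi < N; bi += B)          /* block row of C    */
        for (int bj = 0; bj < N; bj += B)      /* block column of C */
            for (int bk = 0; bk < N; bk += B)  /* block inner index */
                /* multiply-accumulate one pair of B x B subblocks */
                for (int i = bi; i < bi + B; i++)
                    for (int j = bj; j < bj + B; j++)
                        for (int k = bk; k < bk + B; k++)
                            C[i][j] += A[i][k] * Bm[k][j];
}
```

In the parallel versions the paper measures, the subblocks of the operands are communicated between nodes (broadcast along mesh rows and shifted along columns) rather than looped over locally, which is where the communication overhead analyzed in the abstract arises.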
  • G.C. Fox, A.J.G. Hey
    ABSTRACT: We study the peripheral cross sections of resonances that cannot be produced by π-exchange. In particular, we concentrate on the four meson nonets expected as L = 1 quark states (i.e., the JP = 0+πN(980); JP = 1+ A1, B; JP = 2+ A2). We use SU(3), Regge poles, factorization, exchange degeneracy, pole extrapolation, and the vector-meson-photon analogy. We predict the cross sections in both photoproduction and non-diffractive hadronic reactions. In passing, we discuss the large unnatural-parity (B, K−QB) exchange contributions and even the possibility of studying ππ → πω while avoiding the B production background.
    Nuclear Physics B. 01/1973.
  • C. B. Chiu, G. C. Fox, Anthony J. G. Hey
    ABSTRACT: What is phenomenology? Reach not for your dictionary; make no vain efforts to pronounce it; we will come clean and explain all. Science is noted for a competitive and helpful interaction between theorists and experimentalists. Unfortunately in almost all developing sciences, the moving hand of time drives a widening wedge between theory and experiment. Thus theorists are fully occupied in the mathematical and philosophical intricacies of their latest ideas. Again, experimentalists must concentrate on the design of their apparatus to insure they will get the best possible results current technology will allow. Phenomenology seeks to close the gap between those once close friends, theory and experiment, and so restore the interaction which is both vital to and characteristic of science. Although a classical concept, phenomenology is best known in its second-quantized form. The basic tool of the phenomenologist is, first, the construction of simple models that embody important theoretical ideas, and then, the critical comparison of these models with all relevant experimental data. It follows that a phenomenologist must combine a broad understanding of theory with a complete knowledge of current and future feasible experiments in order to allow him to interact meaningfully with both major branches of a science. The impact of phenomenology is felt in both theory and experiment. Thus it can pinpoint unexpected experimental observations and so delineate areas where new theoretical ideas are needed. Further, it can suggest the most useful experiments to be done to test the latest theories. This is especially important in these barren days where funds are limited, experiments take many "physicist-years" to complete, and theories are multitudinous and complicated. Phenomenology is applicable in many sciences but this conference was organized with the hope of emphasizing the wide scope and importance of phenomenology in particle physics. 
In fact, in the time available, not even all the important applications to particle physics could be covered. Some of these omissions were repaired in a workshop, held at Caltech just after the main conference reported here, and devoted to physics at intermediate energies (≲ 5 GeV). This area is particularly suitable for phenomenology as the qualitative features have been well explored and further progress demands difficult experiments with high statistics. Phenomenology can indicate, for instance, which of the some hundred (quasi) two-body reactions will be most fruitful to study. In the following we map some of the more active fields of phenomenology, indicating where they have been covered in either the present volume, our companion workshop, or elsewhere. The contents of the current volume are summarized in more detail in the abstracts of the invited papers, which have been collected together in pages xi to xvi. We are indebted to many people for making this conference possible: Professor R.B. Leighton for his generous sponsorship; Nancy Hopkins and James Black of the Caltech Alumni Office for their efficient and cheerful organization; the session chairmen, M. Gell-Mann, W. Selove, J.D. Bjorken, M.J. Moravcsik, J.D. Jackson, T. Ferbel, R.L. Walker and S.C. Frautschi, for the smooth running of the conference; Susan Berger for her delightful cover; and our secretaries for their careful typing, with an especial thank you to Chris St. Clair, who also drew the amusing illustrations. Alvin Tollestrup originally had the good idea of holding a phenomenology conference; we are grateful to him and our colleagues at Caltech for the encouragement which has made the organization and editing of this conference so enjoyable.
  • Geoffrey C. Fox, Anthony J. G. Hey
    ABSTRACT: The Grid is sometimes called the next-generation Web. The Web makes information available in a transparent and user-friendly way; the Grid goes one step further in that it enables members of a dynamic, multi-institutional virtual organisation to share distributed computing resources to solve an agreed set of problems in a managed and coordinated fashion. With the Grid, users should be unaware of whether they are using the computer or data on their own desktop or any other computer or resource connected to the international network. Users get the resources they need, anytime and from anywhere, with the complexity of the Grid infrastructure hidden from them. The technology needed to implement the Grid includes new protocols, services, and APIs for secure resource access, resource management, fault detection, and communication. It also introduces application concepts such as virtual data, smart instruments, collaborative design spaces, and meta-computations. National and international Grid initiatives have been funded all over the world. In high-energy physics, the first phase of the LHC Computing Grid Project has recently been set up. Its role is to prepare, coordinate, and manage the international infrastructure needed to share and handle on the Grid the unprecedented amount of data (several petabytes per year) that the LHC experiments will generate starting around 2007. Architectures and resources have to be defined to fulfil the needs of the various participating scientific and engineering communities of over 6000 physicists and engineers coming from more than fifty countries in Europe, the Americas, Asia and elsewhere. Experience and know-how have to be built up in linking tens of thousands of commodity components combined into tiers of varying complexity (from tens of thousands down to a few tens of nodes linked to the Grid).
These managed components include CPUs, disks, network switches, and mass storage, plus the manpower and other resources needed to make the whole setup function. Issues of scale, efficiency and performance, resilience, fault tolerance, total cost (acquisition, maintenance, operation), usability, and security have to be taken into account. Our speakers will address these issues in a set of five lectures, starting with a framing overview of e-Science and Grids in general, elaborating on the underlying Grid Service architecture, OGSA, and Web Services, and ending with a forecast of likely futures for the Grid.
  • Charles Bin Chiu, Geoffrey C. Fox, Anthony J. G. Hey
  • F. Berman, G. Fox, A. J. G. Hey
  • F. Berman, G. Fox, A. J. G. Hey