Douglass E. Post, PhD
Carnegie Mellon University | CMU · Software Engineering Institute
About
259 Publications
20,610 Reads
6,446 Citations
Additional affiliations
June 2005 - present: Independent Researcher
Position: Chief Scientist and CREATE Program Manager
March 2001 - June 2005
March 1998 - March 2001
Education
September 1968 - November 1974
Publications (259)
On the basis of an analysis of the ITER L-mode energy confinement database, two new scaling expressions for tokamak L-mode energy confinement are proposed, namely a power law scaling and an offset-linear scaling. The analysis indicates that the present multiplicity of scaling expressions for the energy confinement time τE in tokamaks (Goldston, Kay...
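As a rough illustration of the two functional forms named above (generic shapes only; the fitted coefficients are those reported in the paper and are not reproduced here), the power-law and offset-linear scalings of the energy confinement time can be written as

\tau_E^{\mathrm{PL}} = C\, I_p^{\alpha_I} B_t^{\alpha_B} \bar{n}_e^{\alpha_n} P^{\alpha_P} R^{\alpha_R} a^{\alpha_a} \kappa^{\alpha_\kappa},
\qquad
\tau_E^{\mathrm{OL}} = \tau_{\mathrm{inc}} + \frac{W_{\mathrm{off}}}{P} \quad\text{(equivalently } W = W_{\mathrm{off}} + \tau_{\mathrm{inc}} P\text{)},

where P is the heating power; the offset-linear form expresses the plasma stored energy as a constant offset plus an incremental confinement time multiplied by the power.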
Recent events in the US defense community hold the promise of a major boost in the use of agile software development methods and a concomitant improvement in the effectiveness of defense software procurement and development.
The USA faces a multitude of threats to its national security and international interests in an era of exponential technology growth and unprecedented access by anyone with a smartphone. Traditionally, the acquisition of US defense systems has relied on sequential methods of conceptual design and development. While successful in the past, these met...
With an annual budget of nearly US$600 billion, the US Department of Defense (DoD) is tasked with protecting the US and its allies and interests abroad against potential adversaries. It must accomplish these tasks in a globalized and highly interconnected world where the pace of technology is moving faster than the current time-consuming DoD acquisi...
The goal of the CREATE program is to develop and deploy physics-based computational engineering tools that can be used to develop virtual prototypes of ships, air vehicles, ground vehicles, and radio frequency antennas to accurately predict their performance in support of the US Department of Defense acquisition process, DoD 5000. The purpose of th...
Silicon Valley is the home of many of the most innovative high-technology industries that have ever existed. Their level of innovation gives them a competitive economic advantage that sustains a significant portion of the US economy. While there have been dozens of attempts in the US and abroad to replicate this success, few have been very successf...
The guest editors of this special issue describe five articles that make up the second part of a series describing US Department of Defense software engineering efforts.
The January/February 2016 issue of this magazine presented descriptions of the US Defense Department's Computational Research and Engineering Acquisition Tools and Environments (CREATE) program and the software engineering approach for managing its programmatic risks. This article describes the software engineering methodology deployed to manage th...
Associate editor in chief Doug Post describes how the periodic table of elements is an example of big data as we know it today.
Today, rapid product innovation is essential to remain competitive. To help spur innovation in the acquisition of major defense systems and reduce their cost, time, and risks, the US Department of Defense launched the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program in 2006 to develop and deploy physics-bas...
Physics-based high-performance computing (HPC) engineering software applications are proving to be highly effective for the development of complex innovative products such as automobiles, airplanes, and microprocessors. Over the next year and a half, CiSE will feature three issues describing the US Department of Defense (DoD) High Performance Computin...
The high level of technological innovation required for a strong national economy and defense is only achievable with a highly productive engineering workforce. AEIC Douglass Post and consultant Richard Kendall discuss how virtual tools and processes can help.
The Department of Defense's High Performance Computing Modernization Program Computational Research and Engineering Acquisition Tools and Environments (CREATE) program is developing and deploying multiphysics high-performance computing software applications for engineers to d...
Here, Douglass Post discusses how the use of virtual prototypes analyzed with physics-based performance prediction tools is a potential game changer for product development.
The guest editors discuss some recent advances in exascale computing, as well as remaining issues.
The need for long-term energy sources, in particular for our highly technological society, has become increasingly apparent during the last decade. One of these sources, of tremendous potential importance, is controlled thermonuclear fusion. The goal of controlled thermonuclear fusion research is to produce a high-temperature, completely ionized...
A new study explains why the days of obtaining performance increases due to higher processor speed are mostly over, and where we go from here.
A multi-regime model of radial particle and energy transport in tokamaks, based on the predictions of low-frequency microinstability theory, has been incorporated into a one-dimensional computer code. The code also includes background neutral gas, neoclassically (3-regime) diffusing impurity ions, adiabatic compression, and self-consistent neutral-...
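For orientation, the kind of one-dimensional radial transport equations such a code advances can be sketched generically (cylindrical geometry; the multi-regime transport coefficients of the paper are not reproduced here):

\frac{\partial n}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\left[ r \left( D \frac{\partial n}{\partial r} - n v \right) \right] + S_n,
\qquad
\frac{\partial}{\partial t}\left( \frac{3}{2} n T \right) = \frac{1}{r}\frac{\partial}{\partial r}\left[ r\, n \chi \frac{\partial T}{\partial r} \right] + S_E,

with the diffusivity D, pinch velocity v, and thermal conductivity χ switching between values appropriate to the different microinstability regimes.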
The impurity radiation for typical tokamak parameters has been numerically calculated using an "average-ion model". Coronal equilibrium values for the emission of oxygen, iron, molybdenum, tungsten and gold were determined from the steady-state solutions of a set of related rate equations which included the effects of electron collisional ionizatio...
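In coronal equilibrium the charge-state populations satisfy a steady-state balance between electron-impact ionization and recombination; a generic form of the rate equations referred to (illustrative only, not the specific average-ion formulation of the paper) is

\frac{dn_z}{dt} = n_e \left[ S_{z-1} n_{z-1} - (S_z + \alpha_z) n_z + \alpha_{z+1} n_{z+1} \right] = 0,

where S_z and α_z are the ionization and recombination rate coefficients for charge state z; the radiated power then follows from the resulting populations and the excitation rates.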
A zero-dimensional time-dependent energy balance model is used to explore the energy loss mechanisms of the CTX spheromak experiment at Los Alamos National Laboratory. A coupled set of model equations representing electron, ion, neutral, and impurity particle balance, electron and ion temperature, and magnetic field decay is solved from initial v...
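A minimal sketch of the electron channel of such a zero-dimensional balance (generic terms only; the CTX-specific sources, sinks, and field-decay model are those of the paper) is

\frac{d}{dt}\left( \frac{3}{2} n_e T_e \right) = P_{\Omega} - P_{ei} - P_{\mathrm{rad}} - \frac{3 n_e T_e}{2 \tau_{Ee}},

with analogous ordinary differential equations for the ion temperature, the particle inventories, and the decaying magnetic field, all integrated forward in time from the initial values.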
Introduction of large amounts of neon into Ohmically heated deuterium discharges in the PLT tokamak results in higher central electron temperature (Te(0) ≈ 3 keV) and values of electron energy containment time that are larger than in regular discharges at the same electron density (τEe = 44 ms at n̄e = 2 × 10¹⁹ m⁻³). For steady-state discharges with h...
The efficiency of neutral beam heating and current drive depends crucially on the deposition of the energy and momentum of the beam in the plasma. This deposition is determined by the atomic processes involved in the stopping (or effective ionization) of the neutral beam atoms. These processes have been studied in detail for the energy range from 1...
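The deposition referred to is governed by exponential attenuation of the injected neutrals along the beam path; a standard schematic form (assuming a single effective stopping cross-section σ_s, introduced here for illustration) is

I(l) = I_0 \exp\left[ -\int_0^{l} n_e(l')\, \sigma_s\big(E_b, n_e, T_e, Z_{\mathrm{eff}}\big)\, dl' \right],

so the local deposition rate is −dI/dl, and the heating and current-drive efficiency depend on where along the path the attenuation occurs.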
The advantages and feasibility of neutral beams with Z ≥ 3 formed from negative ions, accelerated to 0.5–1.0 MeV amu⁻¹, and neutralized, are investigated for use in tandem mirror reactor end plugs. A reactor plasma physics design incorporating these beams has been done with the result that such a reactor could produce Q's (ratio of fusion power to i...
The Computational Research and Engineering Acquisition Tools and Environments (CREATE) Program was established as a new 12-year program in FY 2008 by the Department of Defense (DoD). The CREATE goal is to enable major improvements in DoD's acquisition engineering design and analysis processes by developing and deploying scalable, multi-disciplinary...
The Computational Research and Engineering Acquisition Tools and Environments (CREATE) Program is charged with positively impacting the US Department of Defense (DoD) Acquisition Process via Computational Engineering for capability gaps identified by the CREATE Boards of Directors. These prioritized requirements have been characterized and are annu...
The three papers in this special issue describe the promises and challenges of "design through analysis" product development.
The Nene code project uses minimal but effective processes to develop a physics-based application code for analyzing and predicting complex behaviors and interactions among individual physical systems and particles. Although the expert core development team is anchored at a university, as many as 250 individual researchers have contributed from othe...
In 1998, Numrich and Reid proposed Coarray Fortran as a simple set of extensions to Fortran 95 [7]. Their principal extension to Fortran was support for shared data known as coarrays. In 2005, the Fortran Standards Committee began exploring the ...
The field of engineering is poised to enter a new and exciting era. The exponential growth in computing capability from one floating-point operation per second (FLOPS) in 1945 to 10¹⁵ in 2008 is helping us replace the standard engineering process of iterated empirical design-build-test cycles with an iterated design-mesh-analyze paradigm based on p...
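As a quick consistency check on the growth quoted (about 1 FLOPS in 1945 to about 10¹⁵ FLOPS in 2008), the implied doubling time comes out to roughly 15 months; a minimal sketch of the arithmetic in Python (the 63-year span and the factor of 10¹⁵ are taken from the abstract):

import math

years = 2008 - 1945                 # 63 years of growth cited in the abstract
growth_factor = 1e15                # ~1 FLOPS to ~1e15 FLOPS
doublings = math.log2(growth_factor)            # about 49.8 doublings
months_per_doubling = 12 * years / doublings    # about 15 months
print(f"{doublings:.1f} doublings, one roughly every {months_per_doubling:.0f} months")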
In FY2008, the U.S. Department of Defense (DoD) initiated the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program, a $360M program with a two-year planning phase and a ten-year execution phase. CREATE will develop and deploy three computational engineering tool sets for DoD acquisition programs to use to desig...
Computational science is increasingly supporting advances in scientific and engineering knowledge. The unique constraints of these types of projects result in a development process that differs from the process more traditional information technology projects use. This article reports the results of the sixth case study conducted under the support...
The US Department of Defense is reducing its dependence on the traditional "design, build, break, and fix" paradigm for designing and testing weapons systems by supplementing empirical testing with computational science and engineering. This introduction to the theme issue explores how the DoD's computational systems have contributed to various sci...
The need for high performance computing applications for computational science and engineering projects is growing rapidly, yet there have been few detailed studies of the software engineering process used for these applications. The DARPA High Productivity Computing Systems Program has sponsored a series of case studies of representative computati...
Energetic neutral beams are used for heating and diagnostics in present magnetic fusion experiments. They are also being considered for use in future large experiments. Atomic physics issues are important for both the production of the neutral beams and the interaction of the beams and the plasma. Interest in neutral beams based on negative hydroge...
The Compact Ignition Tokamak (CIT) is a proposed modest-size ignition experiment designed to study the physics of alpha particle heating. The basic concept is to achieve ignition in a modest-size minimum cost experiment by using a high plasma density to achieve the nτE ~ 2 × 10²⁰ s/m³ required for ignition. The high density requires a high toroidal fiel...
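The quoted figure of merit is the usual Lawson-type ignition product. In its simplest form (a 50/50 D-T mix with alpha heating balancing the thermal losses, a schematic statement rather than the detailed CIT analysis), ignition requires

P_\alpha = \frac{1}{4} n^2 \langle\sigma v\rangle E_\alpha V \;\gtrsim\; \frac{3 n T V}{\tau_E}
\quad\Longrightarrow\quad
n \tau_E \;\gtrsim\; \frac{12\, T}{\langle\sigma v\rangle E_\alpha},

which for temperatures in the 10-20 keV range gives values of order 10²⁰ s/m³, consistent with the target quoted above.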
This paper describes observations about software development for high end computing that we have made from several environments. We conducted a series of case studies of different types of codes, from academic codes to codes from governmental agencies. Based on those studies, we have formed a series of observations, some common and some different a...
The High Productivity Computing Systems (HPCS) program seeks a tenfold productivity increase in High Performance Computing (HPC). A change of this magnitude in software development and maintenance demands a transformation similar to other great leaps in industrial productivity. By analogy, this requires a dramatic change to the "infrastructure" and...
The field of computational science is growing rapidly. Yet there have been few detailed studies of the development processes for high performance computing applications. As part of the High Productivity Computing Systems (HPCS) program we are conducting a series of case studies of representative computational science projects to identify the steps...
The field has reached a threshold at which better organization becomes crucial. New methods of verifying and validating complex codes are mandatory if computational science is to fulfill its promise for science and society.
Many institutions are now developing large-scale, complex, coupled multiphysics computational simulations for massively parallel platforms for the simulation of the performance of nuclear weapons and certification of the stockpile, and for research in climate and weather prediction, magnetic and inertial fusion energy, environmental systems, astrop...
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platfo...
An encompassing goal of contemporary scientific computing is to provide quantitatively accurate predictions that can help society make important decisions. The span of this intended influence includes such widely different fields as astrophysics, weather and climate forecasting, quantitative economic policy, environmental regulation, and performanc...
In this second of two issues devoted to the frontiers of simulation, we feature four articles that illustrate the diversity of computational applications of complex physical phenomena. A major challenge for computational simulations is how to accurately calculate the effects of interacting phenomena, especially when such phenomena evolve with diffe...
The march toward increased computing power has opened new vistas and opportunities for computer simulations of nonlinear, complex, physical phenomena that involve the interaction of many different effects. In August 2002, the Los Alamos National Laboratory Center for Nonlinear Studies examined the state of the art of this capability in a conference...
The Fusion Simulation Project, a $20M/year multi-institutional project to develop a comprehensive simulation capability for magnetic fusion experiments with a focus on the International Thermonuclear Experimental Reactor (ITER), is discussed. The FSP is jointly supported by the DOE Office of Fusion Energy Sciences and the DOE Office of Advanced Scien...
Fusion materials that are low activation, have long lifetimes and can withstand high neutron fluxes are essential for fusion energy for both MFE and IFE fusion power systems. The DOE Workshop on Advanced Computational Material Science that met in Washington, D.C., in March 2004 determined that materials testing with a prototypical fusion neutron spect...
The High Performance Computer and Computational Science communities face three major challenges: The Performance Challenge, making the next generation of high performance computers, The Programming Challenge, writing codes that can run on the next generation of very complicated computers, and The Prediction Challenge, writing very complex codes tha...
The magnetic fusion program has proposed a $20M-per-year project to develop a computational predictive capability for magnetic fusion experiments. The DOE NNSA launched a program in 1996, the Accelerated Strategic Computing Initiative (ASCI), to achieve the same goal for nuclear weapons to allow certification of the stockpile without testing....
As pointed out in the Fusion Community "35 Year Plan" presented at the APS DPP in 2002 in Orlando, fusion materials that are low activation, have long lifetimes and can withstand high neutron fluxes are essential for fusion energy for both MFE and IFE fusion power systems. The present US 35 year proposed strategy for fusion materials development ca...
A source of 14 MeV neutrons with fusion reactor level neutron fluxes and fluences is essential for the development of fusion reactor materials. The International Fusion Materials Irradiation Facility (IFMIF) team [1] has developed a well-advanced design to provide this source for $800M (1996$). However, the IFMIF is regrettably facing delays partia...
Features incorporated in the design of the International Thermonuclear Experimental Reactor (ITER) tokamak and its ancillary and plasma diagnostic systems that facilitate operation and control of ignited and/or high Q DT plasmas are presented. Control methods based upon straightforward extrapolation of techniques employed in the present generation...
Each year, computers grow more powerful, and we use them to solve increasingly complex and important problems. In this issue, we explore some limits to this growth.
This Panel was set up by the Fusion Energy Sciences Advisory Committee (FESAC) at its November 2000 meeting for the purpose of addressing questions from the Department of Energy concerning the theory and computing/simulation program of the Office of Fusion Energy Sciences. Although the Panel primarily addressed programmatic questions, it acknowledg...
Physics knowledge (theory and experiment) in energetic particles relevant to design of a reactor scale tokamak is reviewed, and projections for ITER are provided in this Chapter of the ITER Physics Basis. The review includes single particle effects such as classical alpha particle heating and toroidal field ripple loss, as well as collective instab...
The ITER Physics Basis presents and evaluates the physics rules and methodologies for plasma performance projections, which provide the basis for the design of a tokamak burning plasma device whose goal is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. This Chapter summarizes the physics basis fo...
Physics knowledge in plasma confinement and transport relevant to design of a reactor-scale tokamak is reviewed and methodologies for projecting confinement properties to ITER are provided. Theoretical approaches to describing a turbulent plasma transport in a tokamak are outlined and phenomenology of major energy confinement regimes observed in to...
This is the May 1996 report of a subpanel of the US Department of Energy Fusion Energy Sciences Advisory Committee (FESAC), charged with conducting a review of the progress, priorities and potential near-term contributions of TFTR, DIII-D and Alcator C-MOD (and other facilities as appropriate) as part of the transition to a Fusion Energy Sciences P...
A dynamical model has been developed which solves a time-dependent coupled system of equations for the (1) plasma density and (2) hydrogenic concentrations in the implantation layer of several representative plasma facing/limiting walls in tokamaks in order to examine the density behaviour during the transient phase of discharges as well as long pu...
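A minimal sketch of the kind of coupled particle balance described (plasma inventory N_p coupled to the wall implantation-layer inventory N_w; generic recycling terms introduced here for illustration, not the specific wall models of the paper) is

\frac{dN_p}{dt} = S_{\mathrm{ext}} - (1 - R)\frac{N_p}{\tau_p} + \nu N_w,
\qquad
\frac{dN_w}{dt} = (1 - R)\frac{N_p}{\tau_p} - \nu N_w,

where S_ext is the external fuelling rate, R a prompt recycling coefficient, τ_p the particle confinement time, and ν an effective re-emission rate from the implantation layer.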
Recently both the Alcator C-Mod and DIII-D tokamaks observed significant recombination of major ion species in the divertor region during detachment. For sufficiently low temperatures the mixture of neutral atoms and ions can be optically thick to line radiation. The optical depth of the recombined region to Lyα radiation can be very large and opac...
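The optical depth referred to is, schematically, the line-centre absorption integrated along a chord through the recombining region (a rough estimate, not the detailed radiation transport treatment of the paper):

\tau_{\mathrm{Ly}\alpha} \simeq \int_0^{L} n_0(l)\, \sigma_0\, dl \;\sim\; n_0\, \sigma_0\, L,

where n_0 is the ground-state neutral hydrogen density and σ_0 the line-centre absorption cross-section; for the neutral densities and path lengths of a detached divertor this product can greatly exceed unity, so the Lyα radiation is strongly reabsorbed.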
We propose a set of unified models to describe H-mode operation of the core plasma, the edge plasma including the transport barrier, and the scrape-off layer plasma to extrapolate to ITER. These models are being incorporated into several tokamak transport codes for use in extrapolating to ITER. Edge pedestal temperatures of 3.5-4 keV for edge densi...
Charge exchange with neutral hydrogen is examined as a recombination mechanism for multicharged impurity ions present in high-temperature fusion plasmas. At sufficiently low electron densities, fluxes of atomic hydrogen produced by either the injection of neutral heating beams or the background of 'thermal' neutrals can yield an important or even d...
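The competition described can be summarized by comparing the charge-exchange recombination rate with the electron (radiative plus dielectronic) recombination rate for an impurity charge state q; schematically, charge exchange dominates when

n_H \langle\sigma v\rangle_{\mathrm{cx}}^{(q)} \;\gtrsim\; n_e \langle\sigma v\rangle_{\mathrm{rec}}^{(q)},

which is why the effect becomes important at sufficiently low electron density or in the presence of the large atomic hydrogen fluxes produced by neutral beam injection.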
During the past five years, the main theoretical efforts for tokamak edge plasmas have focused on the physics of detached regimes observed in divertor tokamaks [1-4] and divertor simulators [5-8]. These regimes have very low heat loads on plasma facing components and represent a solution to the problem of excessive heat loads. Although the plasma pa...
The physics knowledge relevant to the design of a reactor-scale tokamak - the ITER Physics Basis - has recently been assessed by the ITER JCT, the ITER Home Teams, and the ITER Physics Expert Groups. Physics design guidelines and methodologies for projecting plasma performance in ITER and reactor tokamaks are developed from extrapolations of variou...
Atomic processes have played a key role in the success of the present and the next generation of magnetically and inertially confined controlled fusion experiments. Magnetic fusion experiments are beginning to access the plasma regimes needed for fusion reactors. Recent experiments on the TFTR tokamak at Princeton and the JET tokamak at Abingdon...
The primary purpose of `Computational Atomic Structure' is to give a potential user of the Multi-Configuration Hartree-Fock (MCHF) Atomic Structure Package an outline of the physics and computational methods in the package, guidance on how to use the package, and information on how to interpret and use the computational results. The book is success...
Measurement of plasma and key first wall parameters will have three main roles on ITER. Some of the measurements will be used in real time to prevent the on-set of conditions which could potentially damage the first wall and other in-vessel components (machine protection); others will be used in real-time feedback control loops to control the value...
In the last Diagnostic Workshop, which took place at the mid-point of the ITER EDA, the ITER machine design and physics performance was presented [1] based on the Interim Design Report (IDR). Whereas the design of several major machine components (e.g. TF-coils) were changed considerably compared to the earlier Outline Design (OD), it was pointed o...
The limits on H-mode and high density operation for ITER are determined by the density and temperature at the top of the H-mode transport barrier(M. Kaufmann, et al., 1996 IAEA Montreal, IAEA-CN-64/O2-5). To study this physics and aid in projections for ITER, we have compiled a database of H and L-mode edge profile parameters from Alcator C-Mod, AS...
Recent tokamak divertor experiments with “detached” and “partially detached” divertor regimes show that a major portion of the heating power can be radiated to the walls to reduce the peak heat loads on divertor plates and that a combination of fueling and pumping can control the plasma density and exhaust impurities such as He. Tokamak transport c...
A model has been developed to investigate the requirements for terminating ITER discharges by introducing large quantities of either hydrogen or impurities to radiate the energy and quench the discharge before the discharge can undergo substantial motion and come into contact with the wall. The results of the model indicate that the injection of im...
This paper reviews the status of the design of the divertor and first-wall/shield, the main in-vessel components for ITER. Under nominal ignited conditions, 300 MW of alpha power will be produced and must be removed from the divertor and first-wall. Additional power from auxiliary sources up to the level of 100 MW must also be removed in the case o...
Key objectives of the first ten years of ITER operation are the investigation of the physics of burning plasmas and the demonstration of long-pulse ignited plasma technologies. These include studies of plasma confinement and stability, divertor operation, disruption mitigation and control, noninductive current drive, and steady state operation unde...
This paper describes models that address both power and particle control with the ITER divertor. For power control, the characteristics of the transition from attached to detached divertor operation are studied with an extension of our previous simple model [Y. Igitkhanov et al., 22nd EPS, Bournemouth (1995) p. IV-317]. This new model has been appl...
Recent B2-EIRENE modelling indicates that radiation losses in the divertor, due mostly to graphite, can potentially exhaust up to 180 MW in the divertor. This minimizes the potential cooling of the plasma edge and the potentially negative impact on energy confinement. This "partially detached" regime confines the neutrals in the divertor and provid...
Physics design guidelines, plasma performance estimates, and sensitivity of performance to changes in physics assumptions are presented for the ITER-EDA Interim Design. The overall ITER device parameters have been derived from the performance goals using physics guidelines based on the physics R&D results. The ITER-EDA design has a single-null dive...
Plasma operation conditions and physics requirements to be used as a basis for safety analysis studies are developed and physics results motivated by safety considerations are presented for the ITER design. Physics guidelines and specifications for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and...
The ITER power and particle control system is designed to exhaust the 300 to 400 MW of alpha and auxiliary heating power and the 5 × 10²⁰ He atoms per second created by the fusion reactions, to control the density and to fuel the plasma. The power and particle control system consists of a single null poloidal divertor, a set of active pumps with a...
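The helium source quoted is consistent with the alpha heating power, since each D-T reaction produces one helium ash atom together with a 3.5 MeV alpha particle; a minimal arithmetic check in Python (the 5 × 10²⁰ s⁻¹ figure is from the abstract, the constants are standard):

MEV_TO_J = 1.602e-13        # joules per MeV
he_rate = 5e20              # helium atoms produced per second (from the abstract)
e_alpha_mev = 3.5           # alpha energy per D-T reaction, MeV
p_alpha_mw = he_rate * e_alpha_mev * MEV_TO_J / 1e6
print(f"implied alpha power ~ {p_alpha_mw:.0f} MW")    # roughly 280 MW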
Fusion power and burn duration performance attainable in ITER is affected by physics issues that include energy confinement, L-to-H and H-to-L-mode power transition thresholds, beta-limits, density limits, helium removal, impurity content and the possible need to add medium-Z impurities to facilitate pre-divertor radiation. To assess the performance expe...
The design studies for ITER and, indeed, for all proposed reacting plasma experiments have stressed both the extreme difficulty of exhausting the heating power and the importance of controlling the plasma performance and fusion power by exhausting He ash and controlling the plasma density. The peak heat loads on the ITER divertor plate...
Plasma operation conditions and physics requirements to be used as a basis for safety analysis studies are developed and physics results motivated by safety considerations are presented for the ITER design. Physics guidelines and specifications for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and...
Since the beginning of magnetic fusion research, reducing the impurity level in experiments has been strongly correlated with successful achievement of high performance plasmas. One of the most important examples of this was the recognition that the use of tungsten as a plasma facing material and the associated high radiative losses were responsibl...
It is planned to use atomic processes to spread out most of the heating power over the first wall and side walls to reduce the heat loads on the plasma facing components in ITER to ~ 50 MW. Calculations indicate that there will be 100 MW in bremsstrahlung radiation from the plasma center, 50 MW of radiation from the plasma edge inside the separatrix...