A REVIEW OF THE 1981 PAPER BY HANSEN et al
Abstract
Eisenhower’s warning about the corruption of science by government funding has come true. Climate science has been thoroughly corrupted by government largesse. The modern climate modeling fraud started with four papers: two by Manabe and Wetherald (M&W) in 1967 and 1975, and two by Hansen et al. in 1976 and 1981. The main focus of this article is a detailed review of the 1981 Hansen paper (H81), which provided the foundation for the multi-trillion-dollar climate fraud that we have today. It is claimed that a contrived time series of radiative forcings, or changes in flux at the top of the atmosphere, can be used to calculate a global mean temperature record using ‘equilibrium’ climate models. The radiative forcings are divided into anthropogenic and natural contributions. The models are rerun using just the natural forcings to create an imaginary natural baseline. The anthropogenic forcings are then manipulated to claim that they are the cause of every imaginable increase in the frequency and intensity of ‘extreme weather events’. The models are ‘tuned’ to match the temperature record using feedbacks that modify the response to the initial forcings. In particular, there is a ‘water vapor feedback’ that amplifies the initial warming produced by a ‘greenhouse gas forcing’. The climate models are compared to each other using a hypothetical doubling of the atmospheric concentration of CO2 from 280 to 560 parts per million (ppm); the temperature increase from such a doubling is called the climate sensitivity. All of this is pseudoscientific nonsense. The first example of this approach can be found in H81 figure 5, where a one-dimensional radiative-convective (1-D RC) model was used to create an approximate match to the global mean temperature record using a combination of CO2, volcanic aerosol and solar forcings. The fraudulent use of radiative forcings to make the climate model results appear to match the global mean temperature record continues today: the time series of the radiative forcings used in the CMIP6 model ensemble may be found in figure 2.10 of the Working Group 1 Report for the Sixth IPCC Climate Assessment (AR6).
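To make the forcing-feedback-sensitivity bookkeeping described above concrete, the sketch below works through the standard arithmetic. It assumes the commonly quoted logarithmic CO2 forcing approximation ΔF ≈ 5.35 ln(C/C0) W m⁻², a no-feedback Planck response of roughly 0.3 K per W m⁻², and a generic feedback gain; these numbers are illustrative assumptions chosen for this review, not values taken from H81 or from any particular model.

```python
import math

# Commonly quoted logarithmic approximation for the CO2 radiative forcing
# (coefficient 5.35 W m^-2); used here purely for illustration.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)  # W m^-2

# Assumed 'no-feedback' (Planck) response, roughly 0.3 K per W m^-2.
LAMBDA_0 = 0.3  # K / (W m^-2)

def warming(forcing_wm2, feedback_gain=0.0):
    """Temperature response to a forcing, amplified by an assumed net
    feedback gain f through the usual 1/(1 - f) bookkeeping."""
    return LAMBDA_0 * forcing_wm2 / (1.0 - feedback_gain)

# 'Climate sensitivity': warming for a hypothetical doubling, 280 -> 560 ppm.
f2x = co2_forcing(560.0)                    # about 3.7 W m^-2
print(warming(f2x))                         # ~1.1 K with no feedbacks
print(warming(f2x, feedback_gain=0.6))      # ~2.8 K with an assumed 'water vapor' gain
```

The ‘tuning’ criticized above amounts to choosing the feedback gain and the forcing time series so that this kind of bookkeeping tracks the observed temperature record.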
Starting with the work of M&W in 1967, the ‘equilibrium’ climate models are fraudulent, by definition, before a single line of code is even written. This is because the simplified energy transfer assumptions used to build the models must create climate warming as a mathematical artifact in the model output. There is no ‘equilibrium average climate’ that can be perturbed by CO2 or other ‘greenhouse gases’. As soon as the simplifying assumptions used by M&W are accepted, physical reality is abandoned and one enters the realm of computational climate fiction. Hansen and his group at NASA Goddard followed M&W into their fictional climate realm and have been playing computer games in an equilibrium climate fantasy land ever since.
M&W set out to adapt a weather forecasting model so that it could predict ‘climate’. They apparently failed to understand that the Earth is not in thermal equilibrium and that the coupled non-linear equations used in a global circulation model are unstable and have no predictive capability over the time scales needed for climate analysis. A major justification for the M&W approach was that it provided a second stream of funding for the computers and programmers needed for both weather and climate prediction. Unfortunately, melodramatic prophecies of the global warming apocalypse became such a good source of research funding that the scientific process of hypothesis and discovery collapsed. The climate modelers rapidly became trapped in a web of lies of their own making. The second motivation was employment. After the end of the Apollo (moon landing) program in 1972, NASA was desperate for funding, and a group at NASA Goddard that was using radiative transfer analysis to study planetary atmospheres jumped on the climate bandwagon. Later, as funding for nuclear programs decreased, some of the scientists at the old Atomic Energy Commission, by then part of the Department of Energy (DOE), also jumped on the climate bandwagon. No one bothered to look at the underlying assumptions; a paycheck was more important. They just copied and ‘improved’ the fraudulent computer code.
The only thing that has changed since 1981 is that the models have become more complex. The pseudoscientific ritual of radiative forcing, feedbacks and climate sensitivities that started with H81 continues on a massive scale today. The latest iteration is described in Chapter 7 of the IPCC WG1 AR6 Report. The two original modeling groups have now grown to about 50, all copying each other and following the same fraudulent script. Climate modeling has degenerated past scientific dogma into the Imperial Cult of the Global Warming Apocalypse. The climate models have been transformed into political models that are ‘tuned’ to give the results needed for continued government funding. The climate modelers are no longer scientists. They are prophets of the Imperial Cult who must continue to believe in their own prophecies based on forcings, feedbacks and climate sensitivity.
There was an overwhelming scientific consensus in the 1970s that the Earth was heading into a period of significant cooling. The possibility of anthropogenic warming was relegated to a minority of the papers in the peer-reviewed literature.
We describe the historical evolution of the conceptualization, formulation, quantification, application, and utilization of “radiative forcing” (RF) of Earth’s climate. Basic theories of shortwave and longwave radiation were developed through the nineteenth and twentieth centuries and established the analytical framework for defining and quantifying the perturbations to Earth’s radiative energy balance by natural and anthropogenic influences. The insight that Earth’s climate could be radiatively forced by changes in carbon dioxide, first introduced in the nineteenth century, gained empirical support with sustained observations of the atmospheric concentrations of the gas beginning in 1957. Advances in laboratory and field measurements, theory, instrumentation, computational technology, data, and analysis of well-mixed greenhouse gases and the global climate system through the twentieth century enabled the development and formalism of RF; this allowed RF to be related to changes in global-mean surface temperature with the aid of increasingly sophisticated models. This in turn led to RF becoming firmly established as a principal concept in climate science by 1990. The linkage with surface temperature has proven to be the most important application of the RF concept, enabling a simple metric to evaluate the relative climate impacts of different agents. The late 1970s and 1980s saw accelerated developments in quantification, including the first assessment of the effect of the forcing due to the doubling of carbon dioxide on climate (the “Charney” report). The concept was subsequently extended to a wide variety of agents beyond well-mixed greenhouse gases (WMGHGs; carbon dioxide, methane, nitrous oxide, and halocarbons) to short-lived species such as ozone. The WMO and IPCC international assessments began the important sequence of periodic evaluations and quantifications of the forcings by natural (solar irradiance changes and stratospheric aerosols resulting from volcanic eruptions) and a growing set of anthropogenic agents (WMGHGs, ozone, aerosols, land surface changes, contrails). From the 1990s to the present, knowledge and scientific confidence in the radiative agents acting on the climate system have proliferated. The conceptual basis of RF has also evolved as both our understanding of the way radiative forcing drives climate change and the diversity of the forcing mechanisms have grown. This has led to the current situation where “effective radiative forcing” (ERF) is regarded as the preferred practical definition of radiative forcing in order to better capture the link between forcing and global-mean surface temperature change. The use of ERF, however, comes with its own attendant issues, including challenges in its diagnosis from climate models, its applications to small forcings, and blurring of the distinction between rapid climate adjustments (fast responses) and climate feedbacks; this will necessitate further elaboration of its utility in the future. Global climate model simulations of radiative perturbations by various agents have established how the forcings affect other climate variables besides temperature (e.g., precipitation). The forcing–response linkage as simulated by models, including the diversity in the spatial distribution of forcings by the different agents, has provided a practical demonstration of the effectiveness of agents in perturbing the radiative energy balance and causing climate changes. 
The significant advances over the past half century have established, with very high confidence, that the global-mean ERF due to human activity since preindustrial times is positive (the 2013 IPCC assessment gives a best estimate of 2.3 W m⁻², with a range from 1.1 to 3.3 W m⁻²; 90% confidence interval). Further, except in the immediate aftermath of climatically significant volcanic eruptions, the net anthropogenic forcing dominates over natural radiative forcing mechanisms. Nevertheless, the substantial remaining uncertainty in the net anthropogenic ERF leads to large uncertainties in estimates of climate sensitivity from observations and in predicting future climate impacts. The uncertainty in the ERF arises principally from the incorporation of the rapid climate adjustments in the formulation, the well-recognized difficulties in characterizing the preindustrial state of the atmosphere, and the incomplete knowledge of the interactions of aerosols with clouds. This uncertainty impairs the quantitative evaluation of climate adaptation and mitigation pathways in the future. A grand challenge in Earth system science lies in continuing to sustain the relatively simple essence of the radiative forcing concept in a form similar to that originally devised, and at the same time improving the quantification of the forcing. This, in turn, demands an accurate, yet increasingly complex and comprehensive, accounting of the relevant processes in the climate system.
We present detailed line-by-line radiation transfer calculations, which were performed under different atmospheric conditions for the most important greenhouse gases: water vapor, carbon dioxide, methane, and ozone. In particular, cloud effects, surface temperature variations, and humidity changes, as well as molecular lineshape effects, are investigated to examine their specific influence on some basic climatological parameters such as the radiative forcing, the long-wave absorptivity, and the back-radiation as a function of an increasing CO2 concentration in the atmosphere. These calculations are used to assess the CO2 global warming by means of an advanced two-layer climate model and to disclose some larger discrepancies in calculating the climate sensitivity. Including solar and cloud effects as well as all relevant feedback processes, our simulations give an equilibrium climate sensitivity of Cs = 0.7°C (temperature increase at doubled CO2) and a solar sensitivity of Ss = 0.17°C (for a 0.1% increase of the total solar irradiance). On this basis, CO2 contributes 40% and the Sun 60% to global warming over the last century.
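The 40%/60% attribution quoted above follows from straightforward arithmetic once the two sensitivities are fixed. The sketch below shows that arithmetic; the CO2 concentrations and the percentage change in total solar irradiance are hypothetical inputs chosen only to illustrate how such a split is computed, not values taken from the paper.

```python
import math

# Sensitivities quoted in the abstract above.
CS = 0.7    # K per CO2 doubling (equilibrium climate sensitivity)
SS = 0.17   # K per 0.1 % increase in total solar irradiance

def co2_warming(c_ppm, c0_ppm):
    """Warming implied by CS, assuming the usual logarithmic scaling."""
    return CS * math.log2(c_ppm / c0_ppm)

def solar_warming(tsi_change_percent):
    """Warming implied by SS for a given percentage change in TSI."""
    return SS * (tsi_change_percent / 0.1)

# Hypothetical century-scale inputs (illustration only): CO2 rising from
# 300 to 410 ppm and a 0.28 % rise in total solar irradiance.
dt_co2 = co2_warming(410.0, 300.0)      # ~0.32 K
dt_sun = solar_warming(0.28)            # ~0.48 K
total = dt_co2 + dt_sun
print(dt_co2 / total, dt_sun / total)   # roughly a 40 % / 60 % split
```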
It has always been mathematically complicated to calculate the average near-surface atmospheric temperature on planetary bodies with a thick atmosphere. Usually, the Stefan-Boltzmann (S-B) black body law is used to provide the effective temperature, then debate arises about the size or relevance of additional factors, including the ‘greenhouse effect’. Presented here is a simple and reliable method of accurately calculating the average near-surface atmospheric temperature on planetary bodies which possess a surface atmospheric pressure of over 10 kPa. This method requires a gas constant and the knowledge of only three gas parameters: the average near-surface atmospheric pressure, the average near-surface atmospheric density, and the mean molar mass of the near-surface atmosphere. The formula used is the molar version of the ideal gas law. It is here demonstrated that the information contained in just these three gas parameters alone is an extremely accurate predictor of atmospheric temperatures on planets with atmospheres >10 kPa. This indicates that all information on the effective plus the residual near-surface atmospheric temperature on planetary bodies with thick atmospheres is automatically ‘baked in’ to the three mentioned gas parameters. Given this, it is shown that no one gas has an anomalous effect on atmospheric temperatures that is significantly greater than any other gas. In short, there can be no 33°C ‘greenhouse effect’ on Earth, or any significant ‘greenhouse effect’ on any other planetary body with an atmosphere of >10 kPa. Instead, it is a postulate of this hypothesis that the residual temperature difference of 33°C between the S-B effective temperature and the measured near-surface temperature is actually caused by adiabatic auto-compression.
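The method described is simply the molar form of the ideal gas law, PV = nRT, rearranged for temperature: T = PM/(ρR). A minimal check with Earth’s standard sea-level values follows; the inputs are standard-atmosphere figures used for illustration, not numbers taken from the paper.

```python
R = 8.314  # J mol^-1 K^-1, universal gas constant

def surface_temperature(pressure_pa, density_kg_m3, molar_mass_kg_mol):
    """Molar form of the ideal gas law solved for temperature: T = P*M / (rho*R)."""
    return pressure_pa * molar_mass_kg_mol / (density_kg_m3 * R)

# Standard sea-level values for Earth: 101325 Pa, 1.225 kg m^-3, 0.02897 kg mol^-1.
print(surface_temperature(101325.0, 1.225, 0.02897))  # ~288 K
```

With these inputs the formula returns about 288 K, the commonly quoted global mean near-surface temperature.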
It follows from the analysis of observational data that the secular variation of the mean temperature of the Earth can be explained by the variation of short-wave radiation arriving at the surface of the Earth. In connection with this, the influence on the thermal regime of long-term changes in radiation caused by variations of atmospheric transparency is studied. Taking into account the influence on the thermal regime of changes in the Earth’s planetary albedo during the development of glaciations, it is found that comparatively small variations of atmospheric transparency could be sufficient for the development of Quaternary glaciations. DOI: 10.1111/j.2153-3490.1969.tb00466.x