UNDERSTANDING THE CLIMATE FRAUD



Introduction to Ventura Photonics Climate Posts 12 to 16, VPCP 12_16


Roy Clark



Only an introduction and set of summaries are posted here.

The full posts are available as .pdf files

The links to the full .pdf posts are:


Climate Pseudoscience, VPCP 012.1

The Growth of the Climate Fraud, VPCP 013.1

Nine Papers that Reveal the Climate Fraud, VPCP 014.1

A Review of Hansen et al, 1993, VPCP 015.1

The Radiation Balance of the Earth, VPCP 016.1


A .pdf file of this introduction is available at:

ClimateFraud.Intro, VPCP 012_016



Mainstream climate science has degenerated past scientific dogma into a rather unpleasant ‘Imperial Cult of the Global Warming Apocalypse’. The climate modelers have abandoned physical reality and chosen to play computer games in an equilibrium climate fantasy land of forcings, feedbacks and a climate sensitivity to CO2. They are now trapped in a web of lies of their own making. The multi-trillion dollar climate fraud that we have today started in the nineteenth century with speculation that changes in the atmospheric concentration of CO2 could cycle the earth through an Ice Age. The complexities of the climate system were oversimplified and reduced to an ‘equilibrium air column’. When the CO2 concentration was increased, this approach had to produce an increase in surface temperature as a mathematical artifact of the calculation. The first estimates of the effect of changes in atmospheric CO2 concentration on the ‘temperature of the ground’ were published by Arrhenius in 1896. He used the ‘equilibrium air column’ model, so his results were mathematical artifacts that had nothing to do with the surface temperature of the earth. However, the idea that CO2 could influence the earth’s climate gradually became scientific dogma. Instead of an Ice Age cycle, humans were now causing climate change through fossil fuel combustion.


The first generally accepted ‘radiative convective equilibrium’ computer climate model was published by Manabe and Wetherald (M&W) in 1967. It was simply an ‘improved’ version of the nineteenth century ‘equilibrium air column’. By making the equilibrium climate assumption, M&W abandoned physical reality and entered the realm of computational climate fiction. Their model contained four fundamental scientific errors, which they chose to ignore. They spent the next 8 years incorporating their 1967 air column model into a ‘highly simplified’ general circulation model (GCM). The mathematical warming artifacts from M&W (1967) were now incorporated into every unit cell of the GCM. Later climate modeling work failed to address the errors in the underlying M&W assumptions. Instead, ‘improvements’ were introduced by Hansen et al in 1981 that added three more fundamental scientific errors. Little has changed since 1981 except that computer technology has improved significantly and the models have become a lot more complex. ‘Efficacies’ were added to the radiative forcings by Hansen et al in 2005. However, the underlying assumptions remain the same. The fundamental error is still the equilibrium assumption. This was conveniently summarized by Knutti and Hegerl in 2008.


“When the radiation balance of the Earth is perturbed, the global surface temperature will warm and adjust to a new equilibrium state”.


Such an equilibrium state does not exist.


Melodramatic prophecies of the global warming apocalypse became such a good source of research funding that the scientific process of hypothesis and discovery collapsed. Irrational belief in the ‘Sacred Spaghetti Plots’ of global warming created by the climate models has become a prerequisite for funding in climate science. The underlying climate equilibrium assumption was never challenged. An elaborate modeling ritual based on pseudoscientific radiative forcings, feedbacks and the climate sensitivity to a ‘CO2 doubling’ has grown into a worldwide propaganda machine involving at least 50 different climate modeling groups. Eisenhower’s warning about the corruption of science by government funding has come true.


Two external factors contributed to the growth of the climate fraud. As funding was reduced for NASA space exploration and DOE nuclear programs, climate modeling became an alternative source of revenue. There was also a deliberate decision by various outside interests, including environmentalists and politicians, to exploit the fictional climate apocalypse to further their own causes. The World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) were used to promote the global warming scare. The UN Intergovernmental Panel on Climate Change (UN IPCC) was established in 1988 and the US Global Change Research Program (USGCRP) was established by Presidential initiative in 1989 and mandated by Congress in 1990.


There is no single person or single event that can be identified as the source of the climate fraud. There is no ‘smoking gun’. The climate fraud may be described as a confluence of special interest ‘bandwagons’ that have coalesced into a quasi-religious cult. The climate modelers have trapped themselves in a web of lies from which they can’t escape. In the US, the Imperial Cult of the Global Warming Apocalypse is firmly entrenched in the ‘deep state’ of the US government through the USGCRP. Thirteen government agencies are involved. Since it was first established in 1989, the USGCRP has used the results of fraudulent ‘equilibrium’ climate models to perpetuate a massive Ponzi or pyramid scheme based on exaggerated claims of anthropogenic global warming. It takes the climate model output generated by agencies such as NASA and DOE and, without question, cycles the fake climate warming through the 13 US agencies to establish a US climate policy that mitigates a nonexistent problem. The same group of climate modelers also provides fraudulent climate warming data for use by the IPCC in their assessment reports. The pigs have been filling their own trough at taxpayer expense for over 30 years. There has been no significant oversight. The climate modelers are no longer scientists; they have become prophets of the Imperial Cult who must continue with their pseudoscientific beliefs and save the world from a non-existent problem. They have to perpetuate the climate fraud to keep their jobs and avoid the legal consequences of their actions.


The rest of this article is divided into 5 separate posts that address different aspects of the climate fraud. The contents and summary of each post are provided here. The full posts may be downloaded as .pdf files. The first post, ‘Climate Pseudoscience’, explains the scientific errors introduced by the equilibrium climate assumption. A ‘CO2 doubling’ produces an initial wavelength specific decrease in the LWIR flux emitted to space at the top of the atmosphere (TOA). A change in flux at TOA is called a ‘radiative forcing’. However, here it is the result of absorption at lower levels in the atmosphere. There is no equilibrium, so the flux change has to be analyzed as a change in the rate of cooling of the atmosphere. In the troposphere, a ‘CO2 doubling’ produces a maximum decrease in the rate of cooling of +0.08 K per day. At an average lapse rate of -6.5 K km-1, an increase of 0.08 K is produced by a decrease in altitude of about 12 meters. This is equivalent to riding an elevator down 4 floors. The slight heating is dissipated by a combination of wideband LWIR emission and turbulent convective mixing. It cannot change the surface temperature. A ‘CO2 doubling’ also produces a small increase in the downward LWIR flux from the lower troposphere to the surface. Over the oceans, this is fully coupled to the much larger and more variable wind driven evaporation or latent heat flux. Over land, the surface temperature is reset each day by the local weather system that changes the diurnal convection transition temperature. There can be no ‘climate sensitivity’ to CO2, nor can the ‘radiative forcing’ from ‘greenhouse gases’ change the radiative energy balance of the earth.
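The elevator arithmetic can be checked directly: the altitude change that corresponds to a given temperature change at a fixed lapse rate is just the ratio of the two. A minimal Python sketch, using only the figures quoted above (+0.08 K and an average lapse rate of -6.5 K km-1):

```python
# Altitude change equivalent to a small temperature change at a fixed lapse rate.
# Figures taken from the text: +0.08 K change, average lapse rate of -6.5 K/km.

delta_T = 0.08            # temperature change, K
lapse_rate = 6.5          # magnitude of the average lapse rate, K per km

delta_z_km = delta_T / lapse_rate      # equivalent altitude change, km
delta_z_m = delta_z_km * 1000.0        # convert to meters

print(f"Equivalent altitude change: {delta_z_m:.1f} m")   # ~12.3 m
# At roughly 3 m per story, this is about four floors in an elevator.
```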


The second post addresses the evolution of the climate fraud from the nineteenth century ‘equilibrium air column’ to the ‘Imperial Cult of the Global Warming Apocalypse’ responsible for the multi-trillion dollar climate fraud of today. This includes the failure to correct the equilibrium assumption used in the climate models, ‘mission creep’ by government agencies in search of funding and the exploitation of the ‘global warming apocalypse’ by various outside groups.


The third post provides a review of 9 ‘scientific’ papers that reveal the equilibrium climate fraud. The first five papers deal with the development of the climate models: Arrhenius [1896], Manabe and Wetherald [1967], [1975], Hansen et al [1981] and Knutti and Hegerl [2008]. The next two papers consider the creation of a ‘climate sensitivity’ to CO2 using the ‘global mean temperature change’: Gregory et al [2020] and Otto et al [2013]. The next paper addresses ‘radiative forcing’ as explained by Ramaswamy et al [2019]. The final paper examines the process of ‘climate attribution’ as described by Herring et al [2022]. These papers provide a convenient starting point for further investigation of the climate fraud.


The fourth post is a review of the 1993 paper by Hansen et al ‘How sensitive is the world’s climate?’ This provides a convenient description of the status of climate modeling 5 years after the United Nations Intergovernmental Panel on Climate Change (IPCC) was formed. It contains the same pseudoscientific nonsense about radiative forcings, feedbacks and a climate sensitivity to CO2 that is contained in the 2021 IPCC climate assessment. Little has changed since this paper was published in 1993, except that the computer technology has improved significantly and the climate models have become larger and more complex. The short answer to the title of this paper is that any climate sensitivity to CO2 is ‘too small to measure’.


The fifth post explains the radiation balance of the earth in terms of non-equilibrium energy transfer. This includes the attenuation of the solar flux, energy storage in the climate system, the LWIR surface exchange energy, the tropospheric heat engine and the outgoing longwave radiation. The ‘effective emission temperature’, the oversimplification of the radiation budget, radiative forcing and the influence of ocean oscillations are then considered.




CLIMATE PSEUDOSCIENCE


Roy Clark PhD


Ventura Photonics Climate Post 12, VPCP 012.1


Link to Full Post: VPCP 012.1






Summary


The observed increase in the atmospheric CO2 concentration of 140 ppm since the start of the Industrial Revolution has produced an initial decrease of approximately 2 W m-2 in the longwave IR (LWIR) emitted to space at the top of the atmosphere (TOA) within the spectral regions of the CO2 emission bands. The climate modelers assume that this ‘radiative forcing’ perturbs the ‘radiation balance’ of the earth and that the surface temperature increases until a new ‘equilibrium state’ is reached. This increase in temperature is also amplified by a ‘water vapor feedback’. The ‘equilibrium climate sensitivity’ (ECS) to a doubling of the CO2 concentration from 280 to 560 ppm for the CMIP6 climate model ‘ensemble’ is between 1.8 and 5.6 °C. These forcings, feedbacks and ECSs are pseudoscientific nonsense. A doubling of the CO2 concentration produces a maximum change of +0.08 K per day in the rate of cooling of the troposphere. This slight warming is too small to measure. At a lapse rate of -6.5 K km-1, a change in temperature of +0.08 K is produced by a decrease in altitude of approximately 12 meters. This is equivalent to riding an elevator down four floors.
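The approximately 2 W m-2 figure for the 140 ppm increase and the 3.7 W m-2 figure for a ‘CO2 doubling’ quoted in these posts are consistent with the widely used simplified logarithmic expression for the change in LWIR flux at TOA, ΔF ≈ 5.35 ln(C/C0) W m-2 (Myhre et al., 1998). The post itself does not cite this expression, so the short Python sketch below is offered only as an illustration of where numbers of this magnitude come from:

```python
import math

# Simplified logarithmic expression for the change in LWIR flux at TOA from a
# CO2 increase, delta_F = 5.35 * ln(C / C0) W m-2 (Myhre et al., 1998).
# Used here only to illustrate the magnitudes quoted in the text; the expression
# is not taken from the post itself.

def co2_flux_change(c_ppm, c0_ppm=280.0):
    """Approximate decrease in LWIR flux at TOA for a CO2 increase from c0 to c."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"280 -> 420 ppm (+140 ppm): {co2_flux_change(420.0):.1f} W m-2")  # ~2.2
print(f"280 -> 560 ppm (doubling): {co2_flux_change(560.0):.1f} W m-2")  # ~3.7
```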


At the surface, the penetration depth of the LWIR flux into the ocean is less than 100 microns. Here it is fully coupled to the wind driven evaporation. An increase in the atmospheric CO2 concentration produces an increase in downward LWIR flux to the surface that is similar in magnitude to the decrease at TOA. The surface temperature increase from this is too small to measure. The observed climate warming has been produced by the recent positive phase of the Atlantic Multi-decadal Oscillation (AMO) augmented by various bias effects such as urban heat islands, changes to the number and locations of the weather stations used in the climate averaging and other ‘adjustments’ to the raw weather station data. The climate models have simply been ‘tuned’ to match the ‘adjusted’ global average temperature record. A contrived set of radiative forcings has been created to give the illusion that an increase in ‘greenhouse gases’ has caused the recent warming. The First Law of Thermodynamics, Conservation of Energy, requires that any absorbed solar flux that is not returned to space as LWIR emission remain somewhere in the climate system. The idea that there should be an exact planetary flux balance that is ‘perturbed’ by an increase in atmospheric CO2 concentration is incorrect. There can be no ‘climate sensitivity’ to CO2.


In addition, the climate models require the solution of very large numbers of coupled non-linear equations. The errors associated with this type of modeling grow over time. These are known as Lorenz instabilities. There is no reason to expect these climate models to have any predictive capabilities over the time scales involved in climate studies.
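The error growth referred to here can be illustrated with the classic Lorenz (1963) three-variable system, in which two trajectories that start almost identically diverge after a finite time. The sketch below is purely schematic; it is the standard textbook example of sensitivity to initial conditions in coupled non-linear equations, not a climate model:

```python
# Illustration of sensitivity to initial conditions using the Lorenz (1963) system.
# Two trajectories starting 1e-8 apart in x diverge to order-one separation.
# Schematic only: a simple Euler integration of the classic chaotic example.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # perturbed by one part in 10^8

for step in range(1, 4001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 0:
        separation = abs(a[0] - b[0])
        print(f"t = {step * 0.01:5.1f}  |delta x| = {separation:.3e}")
# The separation grows roughly exponentially until it saturates at the size
# of the attractor, after which the two trajectories are effectively unrelated.
```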


It is time to shut down the ‘equilibrium’ climate models and dismantle the massive multi-trillion dollar climate fraud.




THE GROWTH OF THE CLIMATE FRAUD


Roy Clark PhD


Ventura Photonics Climate Post 13, VPCP 013.1


Link to Full Post: VPCP 013.1





Summary


The multi-trillion dollar climate fraud that we have today started in the nineteenth century with speculation that changes in the atmospheric concentration of CO2 could cycle the earth through an Ice Age. The complexities of the climate system were oversimplified and reduced to an ‘equilibrium air column’. When the CO2 concentration was increased, this approach had to produce an increase in surface temperature as a mathematical artifact of the calculation. The idea that CO2 could cause global warming gradually became scientific dogma. The first generally accepted ‘radiative convective equilibrium’ computer climate model was published by Manabe and Wetherald (M&W) in 1967. It was simply an ‘improved’ version of the nineteenth century ‘equilibrium air column’. By making the equilibrium climate assumption, M&W abandoned physical reality and entered the realm of computational climate fiction. Their model contained four fundamental scientific errors, which they chose to ignore. They spent the next 8 years incorporating their 1967 air column model into a ‘highly simplified’ general circulation model (GCM). The mathematical warming artifacts from M&W (1967) were now incorporated into every unit cell of the GCM. Later climate modeling work failed to address the errors in the underlying M&W assumptions. Instead, ‘improvements’ were introduced by Hansen et al in 1981 that added three more fundamental scientific errors. Little has changed since 1981 except that computer technology has improved significantly and the models have become a lot more complex. However, the underlying assumptions remain the same. The fundamental error is still the equilibrium assumption. This was conveniently summarized by Knutti and Hegerl in 2008.


“When the radiation balance of the Earth is perturbed, the global surface temperature will warm and adjust to a new equilibrium state”.


Such an equilibrium state does not exist.


Melodramatic prophecies of the global warming apocalypse became such a good source of research funding that the scientific process of hypothesis and discovery collapsed. Scientific dogma has now degenerated into the ‘Imperial Cult of the Global Warming Apocalypse’. Irrational belief in the global warming created by the climate models has become a prerequisite for funding in climate science. The underlying climate equilibrium assumption was never challenged. An elaborate modeling ritual based on pseudoscientific radiative forcings, feedbacks and the climate sensitivity to a ‘CO2 doubling’ gradually evolved. The Charney report in 1979 included the initial results from two climate modeling groups using five primitive GCMs. By 1995, 18 coupled climate models were available from seven different countries. The modeling effort for the IPCC is now coordinated through the Coupled Model Intercomparison Project (CMIP). In 2019 there were 49 modeling groups with approximately 100 different models involved in CMIP6. One of the benchmarks used to compare these models is the climate sensitivity or ‘global temperature rise’ produced by a ‘CO2 doubling’. This is claimed to be in the range from 1.8 to 4.7 °C.


In reality, the ‘radiative forcing’ from a ‘CO2 doubling’ is a wavelength specific decrease of 3.7 W m-2 in LWIR flux emitted at the top of the atmosphere. The decrease in LWIR flux is produced by absorption at lower levels in the atmosphere. This changes the rate of cooling of the local air parcel. The maximum change in rate of cooling in the troposphere is +0.08 K per day. At an average lapse rate of -6.5 K km-1, an increase in temperature of 0.08 K is produced by a decrease in altitude of about 12 meters. This is equivalent to riding an elevator down 4 floors. There is also a similar increase in downward LWIR flux emitted by the lower troposphere to the surface. Here the temperature changes produced by this increase in flux are too small to measure in the normal variation of the daily and seasonal surface temperatures. The observed ‘climate sensitivity’ is dominated by the Atlantic Multi-decadal Oscillation coupled to the weather station record. There is also additional heating produced by urban heat islands, changes in the urban/rural mix of weather stations used to calculate the global average and various ‘homogenization adjustments’ used to infill data and correct for bias. A ‘CO2 doubling’ cannot produce a measurable increase in surface temperature. Nor can it have any influence on ‘extreme weather events’.


Two external factors contributed to the growth of the climate fraud. As funding was reduced for NASA space exploration and for DOE nuclear programs, climate modeling became an alternative source of revenue. There was also a deliberate decision by various outside interests, including environmentalists and politicians, to exploit the fictional climate apocalypse to further their own causes. The World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) were used to promote the global warming scare. The UN Intergovernmental Panel on Climate Change (UN IPCC) was established in 1988 and the US Global Change Research Program (USGCRP) was established by Presidential initiative in 1989 and mandated by Congress in 1990. It must be emphasized that the Intergovernmental Panel on Climate Change (IPCC) is a political body, not a scientific one. Its mission is to assess “the scientific, technical and socioeconomic information relevant for the understanding of the risk of human-induced climate change.” This is based on the a priori assumption that human activities are causing CO2 induced global warming. The IPCC has published six major assessment reports: FAR (1990), SAR (1995), TAR (2001), AR4 (2007), AR5 (2013) and AR6 (2021). While the reports may contain a useful compendium of scientific references, material that does not conform to the global warming dogma has usually been omitted. The primary focus of these reports has been on the use of modeling ‘scenarios’ to make melodramatic predictions of future global warming using fraudulent computer models. The climate modeling data used for AR5 and AR6 are derived mainly from the CMIP5 and CMIP6 ‘model ensembles’.


In the UK, the Hadley Centre for Climate Prediction and Research was established at the Met Office in 1989. In conjunction with the Climatic Research Unit (CRU) at the University of East Anglia (UEA), the Hadley Centre provided major support to the IPCC. The first IPCC assessment report was published in 1990. Close ties developed between political leaders and various leading climate researchers. In the UK this included John Houghton (UK Met Office), the CRU at UEA and Margaret Thatcher (UK Prime Minister). The primary function of the climate centers is to provide climate propaganda to justify government policy and continued funding.


In the US, the Imperial Cult of the Global Warming Apocalypse is firmly entrenched in the ‘deep state’ of the US government through the USGCRP. Thirteen government agencies are involved. Since it was first established in 1989, the USGCRP has used the results of fraudulent ‘equilibrium’ climate models to perpetuate a massive Ponzi or pyramid scheme based on exaggerated claims of anthropogenic global warming. It takes the climate model output generated by agencies such as NASA and DOE and, without question, cycles the fake climate warming through the 13 US agencies to establish a US climate policy that mitigates a nonexistent problem. The same group of climate modelers also provides fraudulent climate warming data used by the IPCC in their assessment reports. The pigs have been filling their own trough at taxpayer expense for over 30 years. There has been no significant oversight. The climate modelers are no longer scientists; they have become prophets of the Imperial Cult of the Global Warming Apocalypse. Irrational belief in ‘equilibrium’ climate model results has replaced scientific logic and reason. The climate modelers have been playing computer games in an equilibrium climate fantasy land for over 30 years.


CO2 is a good plant fertilizer, so an increase in the CO2 concentration provides a major benefit: enhanced agricultural production. There is no climate emergency. There is no need for utility scale solar or wind energy. There is no need for the large scale deployment of electric vehicles. It is time to dismantle the entire climate fraud, including the USGCRP, and rebuild the energy infrastructure of the US based on inexpensive, reliable fossil fueled and nuclear electrical power.




NINE PAPERS THAT REVEAL THE EQUILIBRIUM CLIMATE MODELING FRAUD


Roy Clark PhD


Ventura Photonics Climate Post 14, VPCP 014.1


Link to Full Post: VPCP 014.1





Summary


The climate modeling fraud can be understood by examining nine ‘scientific’ papers, starting with five on climate modeling: Arrhenius [1896], Manabe and Wetherald [1967], [1975], Hansen et al [1981] and Knutti and Hegerl [2008]. Climate sensitivity is described by Gregory et al [2020] and Otto et al [2013]. Radiative forcing is explained by Ramaswamy et al [2019]. The process of climate attribution is described by Herring et al [2022]. Further details may be found in the IPCC climate assessment reports. Chapter 7 of the Working Group 1 Report ‘The Earth’s energy budget, climate feedbacks, and climate sensitivity’ in AR6 provides a convenient starting point for further investigation into the pseudoscience of radiative forcings, feedbacks and climate sensitivity [IPCC, 2021].



A REVIEW OF HANSEN et al, 1993

“HOW SENSITIVE IS THE WORLD’S CLIMATE?”


Roy Clark PhD


Ventura Photonics Climate Post 15, VPCP 015.1


Link to Full Post: VPCP 015.1





Summary


The 1993 paper by Hansen et al “How sensitive is the world’s climate?” National Geographic Research and Exploration (1993) 9(2) pp. 142-158 (H93) provides a convenient description of the status of climate modeling 5 years after the United Nations Intergovernmental Panel on Climate Change (IPCC) was formed. It contains the same pseudoscientific nonsense about radiative forcings, feedbacks and a climate sensitivity to CO2 that is contained in the 2021 IPCC climate assessment. Little has changed since this paper was published in 1993, except that the computer technology has improved significantly and the climate models have become larger and more complex. The short answer to the title of H93 is that any climate sensitivity to CO2 is ‘too small to measure’.



THE RADIATION BALANCE OF THE EARTH


Roy Clark PhD


Ventura Photonics Climate Post 16, VPCP 016.1


Link to Full Post: VPCP 016.1





Summary


The Earth is an isolated planet that is heated by the absorption of short wave radiation from the sun and cools by the emission of long wave IR (LWIR) radiation back to space. However, there is no requirement for an exact flux balance between the absorbed solar insolation and the outgoing longwave radiation (OLR). The earth’s orbit is slightly elliptical, so the total solar irradiance (TSI) is 1362 ±45 W m-2. The peak flux at perihelion occurs in early January. Conservation of energy for a stable climate requires that the long term planetary average OLR flux be near 240 W m-2. However, the short term variations are approximately ±100 W m-2. There are also significant seasonal variations in the distribution of the net flux balance. Close to the equinoxes, there is a wide band that extends approximately 35° in latitude on either side of the equator where the absorbed solar flux exceeds the OLR. The location of this band shifts north and then south towards the poles at the summer solstice in each hemisphere. Outside of this band, the OLR exceeds the absorbed solar flux.
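The ±45 W m-2 range follows from the inverse square dependence of the solar flux on the Earth-Sun distance for an orbital eccentricity of about 0.0167. A minimal Python sketch, assuming a mean value of 1362 W m-2:

```python
# Annual variation of the solar flux at the Earth due to the elliptical orbit.
# Inverse-square scaling with distance; eccentricity ~0.0167, mean flux 1362 W m-2.

S_MEAN = 1362.0       # W m-2 at the mean Earth-Sun distance (value used in the text)
ECC = 0.0167          # orbital eccentricity of the Earth

s_perihelion = S_MEAN / (1.0 - ECC) ** 2   # early January, closest approach
s_aphelion = S_MEAN / (1.0 + ECC) ** 2     # early July, farthest point

print(f"Perihelion: {s_perihelion:.0f} W m-2 (+{s_perihelion - S_MEAN:.0f})")
print(f"Aphelion:   {s_aphelion:.0f} W m-2 ({s_aphelion - S_MEAN:.0f})")
# Roughly +46 and -44 W m-2, consistent with the ±45 W m-2 range quoted above.
```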


The planetary average OLR is often converted to an ‘effective emission temperature’ of 255 K using the Stefan Boltzmann Law. This is just the temperature of a blackbody surface emitting a total LWIR flux near 240 W m-2. The spectral distribution of the OLR flux is not that of a blackbody emitter, so it does not define a temperature. The ‘effective emission temperature’ is then combined with an ‘average surface temperature’ of 288 K to give a ‘greenhouse effect temperature’ of 33 K. This is the pseudoscientific warming produced by the ‘greenhouse effect’. The OLR is simply a cumulative cooling flux that is produced by the net upward LWIR emission from many different levels in the atmosphere. The emission from each level is modified by the absorption and emission of the levels above. In order to understand the atmospheric heat transfer, the net LWIR flux at each level has to be converted to a cooling rate. In the tropics, the tropospheric cooling rate is near 2 K per day. The concept of an average planetary energy balance using just 3 numbers has no useful meaning.
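The 255 K figure and the 33 K difference follow from inverting the Stefan-Boltzmann law, T = (F/σ)^(1/4), for a flux of 240 W m-2. A minimal sketch using the numbers quoted above:

```python
# Effective emission temperature from the Stefan-Boltzmann law, T = (F / sigma)^(1/4),
# and the 33 K 'greenhouse effect temperature' quoted in the text.

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m-2 K-4
olr = 240.0               # planetary average outgoing LWIR flux, W m-2
t_surface = 288.0         # quoted 'average surface temperature', K

t_effective = (olr / SIGMA) ** 0.25
print(f"Effective emission temperature: {t_effective:.0f} K")               # ~255 K
print(f"'Greenhouse effect temperature': {t_surface - t_effective:.0f} K")  # ~33 K
```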


The earth is a rotating water planet that has an atmosphere with an IR radiation field. The troposphere functions as an open cycle heat engine that transports part of the absorbed solar heat from the surface to the middle and upper troposphere by moist convection. The surface is 71% ocean and the ocean surface is almost transparent to the solar flux. Approximately 90% of the solar flux is initially absorbed within the first 10 m layer of the ocean and this heat is distributed by convection and wave action within a thermal layer that may reach 100 m depth or more. The large heat capacity of this ocean layer stabilizes the earth’s climate. The diurnal surface temperature change is typically 1 °C or less. The seasonal change is generally near 6 °C or less. The ocean temperature changes with latitude from approximately 30 °C in the equatorial warm pools to -1.8 °C at higher latitudes when seawater starts to freeze.


At the surface, the downward LWIR flux from the lower troposphere to the surface establishes an exchange energy with the upward LWIR flux from the surface. This reduces the net LWIR cooling flux that can be emitted by the surface. In order to dissipate the excess absorbed solar heat, the surface warms up so that the heat is removed by moist convection. The energy transfer processes at the ocean-air and land-air interfaces are different and have to be analyzed separately. Over the oceans, the bulk ocean temperature increases until the excess surface heat is removed by wind driven evaporation. There is no requirement for an exact flux balance between the solar heating and the cooling of the oceans. This produces characteristic, quasi-periodic oscillations in ocean surface temperature that have major effects on the earth’s climate. These oscillations are part of the ocean gyre circulation system. This provides a natural ‘noise floor’ for the surface temperature. Since there is no exact energy balance at the surface, there is no reason to expect an exact energy balance at the top of the atmosphere.


As the warm air rises through the troposphere it expands and cools. For dry air, the lapse rate, or change in temperature with altitude, is -9.8 K km-1. For moist air above the saturation level, water condenses to form clouds with the release of latent heat. This reduces the lapse rate. The US standard atmosphere uses an average lapse rate of -6.5 K km-1. Convection is a mass transport process that is coupled to both the gravitational potential and the rotation or angular momentum of the earth. This leads to the formation of the Hadley, Ferrel and polar cell convective structure, the trade winds, the mid latitude cyclones/anticyclones and the ocean gyre circulation.
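The dry value of -9.8 K km-1 is simply the ratio of the gravitational acceleration to the specific heat of air at constant pressure, Γd = g/cp. A minimal sketch with standard values (g ≈ 9.81 m s-2, cp ≈ 1004 J kg-1 K-1):

```python
# Dry adiabatic lapse rate, Gamma_d = g / c_p, using standard values for air.

g = 9.81          # gravitational acceleration, m s-2
cp = 1004.0       # specific heat of dry air at constant pressure, J kg-1 K-1

gamma_dry = g / cp * 1000.0    # convert from K per m to K per km
print(f"Dry adiabatic lapse rate: {gamma_dry:.1f} K/km")   # ~9.8 K/km
# Latent heat release in moist air reduces this; the US standard atmosphere
# uses an average of 6.5 K/km, as noted in the text.
```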


Because of molecular line broadening effects, the troposphere splits naturally into two thermal reservoirs. The lower thermal reservoir extends to 2 km in altitude and is the source of almost all of the downward LWIR flux to the surface within the main absorption emission bands. The upper thermal reservoir extends from 2 km up to the tropopause. This functions as the cold reservoir of the heat engine. The molecular line broadening effects decouple the downward LWIR flux from the atmospheric emission contribution to the OLR. As the air rises through the troposphere, internal molecular energy is converted to gravitational potential energy. The air continues to cool by net LWIR emission, mainly from the water bands. As the air cools, the density increases, the air sinks and the gravitational potential energy is converted back to heat.


The OLR consists of four main cooling channels: the emission from the water bands in the middle to upper troposphere, the emission from the CO2 bands in the stratosphere, cloud top emission and surface emission through the LWIR transmission window. There are also smaller contributions to the OLR from other greenhouse gases including methane, CH4, ozone, O3 and nitrous oxide, N2O. The IR radiation field in the atmosphere can be calculated to high accuracy using radiative transfer algorithms and the HITRAN database. When the atmospheric concentration of a greenhouse gas is increased, there is an initial wavelength specific decrease in the OLR. In particular, for a ‘doubling’ of the atmospheric CO2 concentration from 280 to 560 ppm there is a decrease in the LWIR flux of approximately 3.7 W m-2 emitted mainly by the P and R branches of the CO2 emission band near 640 and 700 cm-1. There is also a slight decrease in the weaker emission by the CO2 overtone bands near 950 and 1050 cm-1. This change in flux at TOA is called a ‘radiative forcing’. It is assumed that this perturbs the ‘radiation balance’ of the earth. The surface temperature is then supposed to ‘adjust’ to this perturbation with an increase that restores the LWIR flux at TOA. This is pseudoscientific nonsense. There is another required step in the radiative transfer analysis that has been ignored.


The ‘radiative forcing’ does not magically appear at TOA. It is produced by absorption at lower levels in the atmosphere. The small amount of heat coupled to each level in the atmosphere has to be converted to a change in the rate of cooling. Once this is done, the maximum decrease in cooling rate in the troposphere is +0.08 K per day for a ‘doubling’ of the CO2 concentration. At an average lapse rate of -6.5 K km-1, an increase in temperature of 0.08 K requires a decrease in altitude of approximately 12 meters. This is equivalent to riding an elevator down 4 floors. Such temperature changes are too small to measure.
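The conversion from a change in net flux to a change in the rate of heating or cooling of an atmospheric layer can be written as dT/dt = (g/cp)(ΔFnet/Δp). The sketch below uses an assumed 100 hPa layer and an assumed 1 W m-2 change in net flux purely to illustrate the order of magnitude; these two values are assumptions for the example, not figures taken from the post:

```python
# Convert a change in net flux across an atmospheric layer into a heating rate:
#   dT/dt = (g / c_p) * (delta_F_net / delta_p)
# The 100 hPa layer thickness and 1 W m-2 flux change below are assumptions
# chosen only to illustrate the order of magnitude, not values from the post.

g = 9.81              # gravitational acceleration, m s-2
cp = 1004.0           # specific heat of dry air at constant pressure, J kg-1 K-1
delta_F = 1.0         # assumed change in net flux across the layer, W m-2
delta_p = 100.0e2     # assumed layer thickness, 100 hPa expressed in Pa

heating_rate = g / cp * delta_F / delta_p          # K per second
print(f"Heating rate: {heating_rate * 86400.0:.2f} K per day")   # ~0.08 K/day
```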


In addition to the decrease in LWIR flux at TOA produced by a ‘CO2 doubling’, there is a similar increase in the downward LWIR flux to the surface. Over the oceans, the penetration depth of the LWIR flux is less than 100 microns (0.004 inches). Here it is fully coupled to the wind driven evaporation or latent heat flux. Within the ±30° latitude bands, the sensitivity of the latent heat flux to the wind speed is at least 15 W m-2/m s-1. The entire 4 W m-2 produced by a ‘CO2 doubling’ is dissipated by an increase in wind speed of approximately 27 centimeters per second. At present the CO2 concentration has increased by 140 ppm and the increase in downward flux at the surface is near 2 W m-2. This is dissipated by an increase in wind speed of approximately 13 centimeters per second. An increase in the downward LWIR flux to the surface from a ‘greenhouse gas forcing’ cannot produce a measurable increase in temperature at the ocean surface. Similarly, over land, the day to day variations in the diurnal convection transition temperature are much larger than any temperature increase that can be produced by the observed increase in atmospheric CO2 concentration.
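The 27 and 13 centimeter per second figures follow from dividing the quoted flux changes by the quoted latent heat flux sensitivity of 15 W m-2 per m s-1 of wind speed. A minimal sketch using the numbers in the text:

```python
# Wind speed increase needed to dissipate a given increase in downward LWIR flux,
# using the quoted latent heat flux sensitivity of 15 W m-2 per m s-1 of wind speed.

SENSITIVITY = 15.0     # W m-2 per m s-1, within the +/-30 degree latitude bands

for label, flux in [("CO2 doubling (~4 W m-2)", 4.0),
                    ("+140 ppm to date (~2 W m-2)", 2.0)]:
    wind_increase = flux / SENSITIVITY          # m s-1
    print(f"{label}: {wind_increase * 100.0:.0f} cm s-1")
# ~27 and ~13 cm s-1, as stated in the text.
```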


The radiation balance of the earth has to be analyzed as a series of dynamically coupled thermal reservoirs. The various flux terms are interactively coupled to these reservoirs, which include the oceans, the land, the lower and upper troposphere and the stratosphere. In addition, heat is converted to gravitational potential energy during atmospheric convection. There is no thermal equilibrium, so the heat transfer has to be analyzed using rates of heating and cooling. The concept of a radiation budget using just 3 numbers has no physical meaning. Furthermore, the initial wavelength specific decrease in LWIR flux (or radiative forcing) produced by an increase in the atmospheric concentration of various greenhouse gases does not alter the energy balance of the earth in a way that can change the surface temperature. The small amount of additional heat coupled to the troposphere is dissipated by wideband LWIR emission and changes in gravitational potential energy. The climate modeling description of climate change using radiative forcings, feedbacks and climate sensitivity is pseudoscientific nonsense.