It cannot reasonably be denied that the main continuous data series used by the IPCC to evaluate global mean surface temperature showed an approximately two-decade "pause" or "hiatus" starting around 2000, which deviated from the predictions of the many general circulation models (more than 100 simulations from 41 climate models) that had been calibrated against the same data series using data from before 2000.
E.g., see the IPCC itself, and Fyfe et al., Nature Climate Change 6, 224–228 (2016), doi:10.1038/nclimate2938.
Those who would deny this plain historical fact should be discounted as climate trolls.
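For anyone who wants to see the kind of comparison at issue, here is a minimal sketch in Python using purely synthetic, illustrative numbers: neither real observations nor real model output, and not the method of any particular study. It simply fits a least-squares trend to an "observed" series over a roughly fifteen-year window and asks where that trend falls within the spread of trends from an ensemble of simulated runs over the same window.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2015)          # a roughly fifteen-year "hiatus" window

def linear_trend(series, t=years):
    """Ordinary least-squares slope, converted to degrees C per decade."""
    return 10.0 * np.polyfit(t, series, 1)[0]

# Purely illustrative synthetic series: a near-flat "observed" record versus
# an ensemble of 100 simulated runs drifting upward at about 0.2 C per decade.
observed = 0.003 * (years - 2000) + rng.normal(0, 0.06, years.size)
ensemble = [0.02 * (years - 2000) + rng.normal(0, 0.06, years.size)
            for _ in range(100)]

obs_trend = linear_trend(observed)
model_trends = np.array([linear_trend(run) for run in ensemble])

# The comparison at issue: where does the observed trend sit within the
# distribution of simulated trends over the same window?
print(f"observed trend:      {obs_trend:+.2f} C/decade")
print(f"ensemble mean trend: {model_trends.mean():+.2f} +/- {model_trends.std():.2f} C/decade")
print(f"fraction of runs with a smaller trend than observed: "
      f"{(model_trends < obs_trend).mean():.2f}")
```

The published comparisons are of this general type, only with the real observational series, the full model archives, and more careful statistics.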
The question arises: How did this "pause" occur?
This is my best educated answer:
I believe the “pause” arose because the data could not continue to be manipulated to sustain such a large fabricated “warming” as in the previous decades of data manipulation.
Call me a crank, but I am not the only scientist to have concluded that a global mean surface temperature cannot reliably be calculated from measured data with enough precision to confirm or infirm warmings or coolings of fractions of a degree (Celsius) on the modern observational timescale.
In my 2007 essay, I (in part) put it this way:
"It was no easy task to arrive at the most cited original estimated rate of increase of the mean global surface temperature of 0.5 C in 100 years. As with any evaluation of a global spatio-temporal average, it involved elaborate and unreliable grid size dependent averages. In addition, it involved removal of outlying data, complex corrections for historical differences in measurement methods, measurement distributions, and measurement frequencies, and complex normalisations of different data sets – for example, land based and sea based measurements and the use of different temperature proxies that are in turn dependent on approximate calibration models. Even for modern thermometer readings in a given year, the very real problem of defining a robust and useful global spatio-temporal average Earth-surface temperature is not solved, and is itself an active area of research.
This means that determining an average of a quantity (Earth surface temperature) that is everywhere different and continuously changing with time at every point, using measurements at discrete times and places (weather stations), is virtually impossible; in that the resulting number is highly sensitive to the chosen extrapolation method(s) needed to calculate (or rather approximate) the average.
Averaging problems aside, many tenuous approximations must be made in order to arrive at any of the reported final global average temperature curves. For example, air temperature thermometers on ocean-going ships have been positioned at increasing heights as the sizes of ships have increased in recent history. Since temperature decreases with increasing altitude, this altitude effect must be corrected. The estimates are uncertain and can change the calculated global warming by as much as 0.5 C, thereby removing the originally reported effect entirely.
Similarly, surface ocean temperatures were first measured by drawing water up to the ship decks in cloth buckets and later in wooden buckets. Such buckets allow heat exchange in different amounts, thereby changing the measured temperature. This must be corrected by various estimates of sizes and types of buckets. These estimates are uncertain and can again change the resulting final calculated global warming value by an amount comparable to the 0.5 C value. There are a dozen or so similar corrections that must be applied, each one able to significantly alter the outcome.
In wanting to go further back in time, the technical problems are magnified. [...]"
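To illustrate the grid-dependence described above, here is a toy calculation with made-up "station" anomalies; it is not any real dataset or any agency's actual gridding scheme. The same sparse, unevenly distributed readings are averaged onto latitude-longitude grids of several cell sizes, with area weighting and empty cells simply dropped, and the resulting "global mean" shifts with the choice of grid.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "stations": temperature anomalies (C) at random locations, deliberately
# clustered toward the northern mid-latitudes the way real networks are.
n = 500
lat = np.clip(rng.normal(45, 25, n), -89.9, 89.9)
lon = rng.uniform(-180, 180, n)
anom = 0.004 * lat + rng.normal(0, 0.5, n)    # weak latitude dependence plus noise

def gridded_global_mean(lat, lon, anom, cell_deg):
    """Area-weighted mean of grid-cell averages for a given cell size in degrees."""
    lat_edges = np.arange(-90, 90 + cell_deg, cell_deg)
    lon_edges = np.arange(-180, 180 + cell_deg, cell_deg)
    total, weight = 0.0, 0.0
    for i in range(len(lat_edges) - 1):
        in_lat = (lat >= lat_edges[i]) & (lat < lat_edges[i + 1])
        # weight each cell by the cosine of its centre latitude (relative area)
        w = np.cos(np.radians(0.5 * (lat_edges[i] + lat_edges[i + 1])))
        for j in range(len(lon_edges) - 1):
            sel = in_lat & (lon >= lon_edges[j]) & (lon < lon_edges[j + 1])
            if sel.any():                     # empty cells simply drop out
                total += w * anom[sel].mean()
                weight += w
    return total / weight

for cell in (5, 10, 30, 60):
    print(f"{cell:>2}-degree grid: 'global mean' anomaly = "
          f"{gridded_global_mean(lat, lon, anom, cell):+.3f} C")
```

The spread here is small because the toy field is smooth and well behaved; the point is only that the "global mean" is not independent of the gridding choice, and real station coverage is far patchier.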
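And here is an equally hedged sketch of how an uncertain correction feeds straight through to the corrected values. The lapse rate and bucket offsets below are hypothetical placeholders, not the figures used by any actual marine-air or sea-surface dataset; the point is only that the "corrected" temperature moves with whatever the analyst assumes about deck heights and bucket types.

```python
# Hypothetical correction for thermometer height on ship decks: assume air
# temperature falls off with height at some near-surface lapse rate.
LAPSE_RATE_C_PER_M = 0.0065      # placeholder value, C per metre

def deck_height_correction(measured_temp_c, assumed_height_m):
    """Refer a ship air-temperature reading back to a nominal height of 0 m."""
    return measured_temp_c + LAPSE_RATE_C_PER_M * assumed_height_m

reading = 18.2                    # one hypothetical ship reading, in C
for height in (5.0, 15.0, 25.0):  # plausible but assumed deck heights
    corrected = deck_height_correction(reading, height)
    print(f"assumed deck height {height:4.1f} m -> corrected reading {corrected:.3f} C")

# The same game for bucket sea-surface temperatures: the assumed heat loss
# of each bucket type sets the offset that gets added back.
BUCKET_OFFSET_C = {"canvas": 0.3, "wooden": 0.1, "insulated": 0.0}   # hypothetical offsets
sst_reading = 16.0
for bucket, offset in BUCKET_OFFSET_C.items():
    print(f"assumed {bucket:9s} bucket  -> corrected SST {sst_reading + offset:.1f} C")
```

Stack a dozen such adjustable corrections on top of one another, as the essay says, and the final curve inherits every one of their assumptions.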
Several other scientists have stressed these same difficulties. The algorithms used to correct for known systematic effects (urban heat-island effects and the like) are guesstimates, and the implementation choices affect the result. I don't believe the GIS and spreadsheet magic.
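To make "the implementation choices affect the result" concrete, here is a simplified, entirely hypothetical urban-heat-island style adjustment, not any agency's actual homogenization code: estimate an urban station's spurious drift against its rural neighbours within some chosen radius, remove it, and report the adjusted trend. The number you get depends on the radius the analyst happens to pick.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2021)
regional = 0.01 * (years - years[0])      # "true" regional trend: 0.10 C/decade
drift = 0.008 * (years - years[0])        # spurious urban heat-island drift

# Hypothetical station set: one urban station carrying the full drift, and
# rural neighbours of which the nearby ones are partly urbanised themselves.
urban = regional + drift + rng.normal(0, 0.15, years.size)
contamination = {20: 0.6, 60: 0.3, 120: 0.1, 250: 0.0}   # km -> fraction of drift shared
rural = {d: regional + frac * drift + rng.normal(0, 0.15, years.size)
         for d, frac in contamination.items()}

def adjusted_trend(max_radius_km):
    """Estimate the urban drift against rural neighbours within max_radius_km,
    remove it, and return the trend of the adjusted series in C per decade."""
    reference = np.mean([s for d, s in rural.items() if d <= max_radius_km], axis=0)
    est_drift = np.polyfit(years, urban - reference, 1)[0]
    adjusted = urban - est_drift * (years - years[0])
    return 10.0 * np.polyfit(years, adjusted, 1)[0]

print(f"unadjusted urban trend: {10.0 * np.polyfit(years, urban, 1)[0]:.2f} C/decade")
for radius in (20, 60, 120, 250):
    print(f"neighbour radius {radius:>3} km -> adjusted trend "
          f"{adjusted_trend(radius):.2f} C/decade  (true regional: 0.10)")
```

Radius, choice of reference stations, breakpoint thresholds: each such choice is defensible on its own, and each one moves the final curve.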
I know what it is like measuring temperatures in a physics lab, and I know data analysis. There is no way geographers and data massage artists can take this kind of physical-measurement data and produce a meaningful global spatio-temporal mean with the claimed precision.
Such graphs should never have been published without complete transparency, from the raw data through every algorithmic step to the final result. Any scientist should be able to see and reproduce every single step, and to know every assumption, manipulation, and exclusion. Without this care it is black-box magic, not testable science. The entire field has failed the test.
There you have it. Many have now argued that the "pause" is not "real". Well, I agree, as I have explained above. They argue that something else is real. I say none of it is real. None of it can even be independently verified.
1 comment:
Estimation of the world temperature is just that: an estimation based on a model that takes a set of measurements as input, transforms them, and out pops a number that is then called the average temperature.
I suspect that model to be just as flawed as any of the other models in play.