Global Warming Pause Real, But A Natural Blip, New Study Claims

GWPF | 23 July 2015

Global Warming Pause and How To Lie With Statistics


Comparing new NOAA annual global surface temperature data to NASA Giss and HadCRUT4 data –  graph: Whitehouse 2015.

A slow-down in global warming is not a sign that climate change is ending, university researchers have found. The phenomenon is a natural blip in an otherwise long-term upwards trend, their research shows. In a detailed study of more than 200 years’ worth of temperature data, results backed previous findings that short-term pauses in climate change are simply the result of natural variation. The findings support the likelihood that a current hiatus in the world’s year-on-year temperature increases – which have stalled since 1998 – is temporary. —Reporting Climate Science, 20 July 2015

I can reveal that the US House of Representatives science committee, led by the Texas Republican Lamar Smith, also has doubts. At the end of last month, committee staff sent emails to several experts in Britain, saying Mr Smith ‘is making climate change data within NOAA a priority’. The committee, they added, was seeking outside help to ‘analyse’ NOAA’s claims – apparently, it would seem, because some members do not trust NOAA’s ‘input’ alone. Among those submitting evidence that challenges NOAA’s assertion is the Global Warming Policy Foundation, the UK sceptic think-tank chaired by Lord Lawson. –David Rose, The Spectator, 22 July 2015

1) Global Warming Pause Real (But It’s A Natural Blip), New Study Claims – Reporting Climate Science, 20 July 2015

2) David Rose: Was The Global Warming Pause A Myth? –
The Spectator, 22 July 2015

3) David Whitehouse: Why Karl et al 2015 Doesn’t Eliminate The ‘Hiatus’ –
Global Warming Policy Forum, 22 July 2015

4) Gordon Hughes: What Is Meant By A “Significant” Trend? –
Global Warming Policy Forum, 22 July 2015

Even accepting the statistical approach taken by Karl et al., it is clear that their errors are larger than they realise, and that the trends they obtain depend upon cherry-picked start and end points that include abnormal conditions, i.e. the 1998-2000 El Nino/La Nina and the 2014 northeast Pacific Ocean “hot spot.” I conclude that the elimination of the hiatus claimed by Karl et al 2015 is unsafe because of bias due to the choice of start and end points that are extremes of natural fluctuations in the global surface temperature record, as well as an overemphasis on statistically poor results. –David Whitehouse, Global Warming Policy Forum, 22 July 2015

The lesson is that no study should rely upon trends over selected short periods of time to make claims about a series with as much variability over time as global temperatures. That is as true for the relatively large increase from 1976 to 1998 as for the more recent period. Even that trend has been exceeded in 10% of all 23-year periods since 1880. Even if the study had not drastically underestimated the amount of variability in 17-year trends in the historical data, there is another problem that is not addressed. This is: what is or was the starting point of the trend?  In the spirit of the classic warning to all statisticians – Darrell Huff’s book titled ‘How to Lie with Statistics’ – it is possible to use a particular set of data to generate a wide range of trends simply by choosing a suitable starting point. –Gordon Hughes, Global Warming Policy Forum, 22 July 2015

1) Global Warming Pause Real, But A Natural Blip, New Study Claims
Reporting Climate Science, 20 July 2015

A slow-down in global warming is not a sign that climate change is ending, university researchers have found. The phenomenon is a natural blip in an otherwise long-term upwards trend, their research shows.

In a detailed study of more than 200 years’ worth of temperature data, results backed previous findings that short-term pauses in climate change are simply the result of natural variation.

The findings support the likelihood that a current hiatus in the world’s year-on-year temperature increases – which have stalled since 1998 – is temporary.

Scientists from the University of Edinburgh analysed real-world historic climate records from 1782 to 2000, comparing them with computerised climate models for the same timescale.

They were able to separate the influence on climate trends of man-made warming – such as from greenhouse gas emissions – from that of natural influences on temperature – such as periods of intense sunlight or volcanic activity.

This showed that random variations can cause short-term interruptions to climate patterns in the form of a pause or surge in warming, in both the real data and in the models, typically lasting up to a decade.

Extreme natural forces, such as strong volcanic eruptions, were shown to disrupt climate trends for decades.

The research highlights the impact of volcanic eruptions on climate: the particles they produce can reflect sunlight away from Earth, causing long-lasting cooling.

The eruption of Mount Tambora in Indonesia in 1815 was among the biggest in recent times, causing a so-called year without summer.

Scientists estimate that, if it occurred today, it would cause a 20-year climate hiatus.

Full story

2) David Rose: Was The Global Warming Pause A Myth?
The Spectator, 22 July 2015

David Rose

Is the world still getting warmer? If so, how fast? Has there been a global warming ‘pause’ or ‘hiatus’ or not? Or has the recent warming rate been as fast, or even faster, than that measured in the 1990s?

These are fundamental questions. They depend not on the complex computer models on which scientists base their projections of the future, but on simple measurement, on readings from thermometers sited in thousands of locations across the world, on land, on buoys in the oceans and in balloons and satellites.

Yet their answers are far from simple, and like many other areas of critical importance in climate science, subject to uncertainty and fierce debate – not that you would know this if you relied on almost all media reports.

Last month, the academic journal Science, the prestigious organ published by the American Association for the Advancement of Science, carried the first of two recent contributions to this discussion. The work of a team led by Thomas Karl from the National Oceanic and Atmospheric Administration (NOAA), it claimed, in the words of the headline on NOAA’s press release: ‘Data show no recent slowdown in global warming.’

This was a biggish deal. The hiatus, which until now scientists have assumed to have existed since around 1998, has been a significant weapon in the armoury of climate change sceptics, because it was not predicted by the vaunted computer models. The level of carbon dioxide in the atmosphere has continued to rise unabated, but average world temperatures have apparently either ceased to rise, or (depending on which of the several sets of temperature data one chooses), risen much more slowly than they did before.

A look back at the 2007 Fourth Assessment Report of the UN’s Intergovernmental Panel on Climate Change illustrates the point. It stated that in the near term, temperatures could be expected to rise by 0.2C per decade. The Fifth Assessment Report six years later admitted this has not been happening: since 1998, it revealed, the increase, at 0.05C per decade, has only been a quarter as great.

However, the accepted margin of error for aggregating readings from all those thermometers is about 0.1C. In other words, the measured rise lately has been only half as big as the error range. Balloon and satellite measurements from the upper atmosphere show no increase at all. Hence one can reasonably argue, as climate sceptics do, that there has been no statistically significant rising trend for well over a decade. Even the IPCC thought this important: the Fifth Report devoted many pages to the hiatus, and to the competing explanations as to why it may have occurred.

Except that NOAA now says it didn’t. According to Karl et al, the pause was merely an ‘artefact of the data,’ which disappeared once they made certain technical changes, mainly to the way they measured and ‘adjusted’ readings from the seas. These changes have the effect of revising the values from earlier years downwards, and increasing them since 2008.

Bingo! ‘The rate of global warming during the last 15 years has been as fast as or faster than that seen during the latter half of the 20th Century,’ says NOAA. ‘The study refutes the notion that there has been a slowdown or “hiatus” in the rate of global warming in recent years.’

Predictably enough, the new paper was widely and enthusiastically reported by media around the world – shorn of the several caveats that Karl et al included with their work. At last, here was proof that the sceptics and ‘deniers’ were wrong. The BBC website set the tone with its headline: ‘US Scientists: Global Warming Pause No Longer Valid.’ According to the Guardian’s John Abraham, the Karl paper should ‘end the discussion of the so-called pause, which never existed in the first place.’

Oddly enough, in a field where one is told that the science is ‘settled,’ there has been disagreement from several eminent scientists. Dr Ed Hawkins, a principal research fellow at Reading University, whom no one could ever call a sceptic, wrote in his blog that even if one accepts NOAA’s data revisions, ‘there has clearly been a slowdown in the rate of warming when compared to other periods’.

Another one not buying the Karl paper’s message was Prof. Judith Curry of Georgia Tech. ‘Uncertainties in global surface temperature anomalies are substantially understated,’ she wrote. ‘This short paper is not adequate to explain the very large changes that have been made to the NOAA data set… while I’m sure this latest analysis from NOAA will be regarded as politically useful for the Obama administration [which is currently bent on using executive action to set unilateral emissions limits against the will of Congress], I don’t regard it as a particularly useful contribution to our scientific understanding of what is going on.’

I can reveal that the US House of Representatives science committee, led by the Texas Republican Lamar Smith, also has doubts. At the end of last month, committee staff sent emails to several experts in Britain, saying Mr Smith ‘is making climate change data within NOAA a priority’. The committee, they added, was seeking outside help to ‘analyse’ NOAA’s claims – apparently, it would seem, because some members do not trust NOAA’s ‘input’ alone.

A committee aide told me: ‘NOAA released a conclusion it claims is based on scientific analysis. It has provided the Committee with documents to show their methodology and we’re seeking to confirm that their conclusions are accurate.’

Among those submitting evidence that challenges NOAA’s assertion is the Global Warming Policy Foundation, the UK sceptic think-tank chaired by Lord Lawson. In a study published today by the GWPF, Dr David Whitehouse has analysed the raw temperature measurements behind the NOAA paper’s claim.

His key finding is that the difference between the old and revised data is ‘much smaller’ than the margin of error which NOAA admits affects all its temperature readings.

Moreover, Dr Whitehouse says, NOAA exaggerated its supposed recent warming trend by cherry-picking its start and end dates, choosing 2000, an unusually cold year, as its starting point, and 2014, a very warm one, as its end. Therefore, he writes, there is ‘no robust evidence that the hiatus does not exist’.

The graph shown here, produced by Dr Whitehouse, shows how wide those error margins are. Each data point is a revised NOAA world average annual temperature – with the error bars added. (The temperatures shown are measured in thousandths of a degree above 14C.)


The same GWPF pamphlet contains an analysis by Gordon Hughes, Professor of Economics at Edinburgh. He finds that climate scientists have for years been ignoring statistical techniques designed to weed out random ‘noise’. These techniques have long been accepted as standard tools by researchers in fields such as econometrics and epidemiology, and Karl et al’s failure to deploy them means ‘the paper does not provide well-founded statistical evidence to draw any reliable conclusions about the rate at which global temperatures have been increasing’.

However, the most striking challenge to the paper comes from an unexpected source – the July issue of the same journal that published Karl, Science. A new paper by Dr Veronica Nieves of the California Institute of Technology finds that the pause is real after all. Crucially, Nieves used NOAA’s own data – but drew very different conclusions. Needless to say, this paper has received no mainstream media publicity at all. The article you are reading now is its first mention outside the specialist literature and a handful of climate blogs. But in Prof Curry’s view, it shows ‘the hiatus lives’.

Full post

3) David Whitehouse: Why Karl et al 2015 Doesn’t Eliminate The ‘Hiatus’
Global Warming Policy Forum, 22 July 2015

Why Karl et al 2015 Doesn’t Eliminate The ‘Hiatus’
Dr David Whitehouse, Global Warming Policy Forum

Even accepting the statistical approach taken by Karl et al, it is clear that their errors are larger than they realise, and that the trends they obtain depend upon cherry-picked start and end points that include abnormal conditions, i.e. the 1998-2000 El Nino/La Nina and the 2014 northeast Pacific Ocean “hot spot.”

When estimating trends, especially for such short periods in a noisy data set such as global surface temperatures, care must be taken with start and end points as they can affect the trend obtained.

Fig 1 shows the difference between the new NOAA data and the currently used NOAA data.

The differences between the two datasets are small. Prior to 2008 the new data was cooler than the existing set; after 2008 it was warmer. The variations are much smaller than the errors, which NOAA says are +/- 0.09°C.

Fig 2 compares the new and current NOAA annual data with the NASA Giss and HadCRUT4 global surface datasets. An offset of +0.1°C has been added to HadCRUT4 to make it more easily comparable to the others (in this analysis we are interested in gradients, not absolute values). HadCRUT4 errors are +/- 0.1°C and NASA Giss errors are +/- 0.05°C, as quoted by the respective teams.


Does the inclusion of the new NOAA data make a difference to the “hiatus” reported in the other three datasets?

I follow the approach adopted by Karl et al in considering only data between 1998 and 2014 in this particular analysis. To quantify the range of trends that would be expected by chance with the new NOAA data, I considered a 17-year time series with the statistical properties of the NOAA data. I performed a Monte Carlo analysis involving 10,000 simulations of random data. My result indicates that the trends reported by Karl et al 2015 – which were only ever marginally significant at the 10% level – are much less significant. Comparing their trends – 0.086°C per decade for 1998-2012, 0.106°C per decade for 1998-2014 and 0.116°C per decade for 2000-2014 – with the outcome of the Monte Carlo simulation revealed positive trends between 0.08-0.12°C per decade 1,133 times out of the 10,000 simulations. I conclude that, irrespective of the small errors quoted for their trends, none of them are robust or provide evidence that the “hiatus” does not exist.
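The Monte Carlo test described above can be sketched in a few lines. This is an illustrative reconstruction, not the original code: the noise model (white noise with a standard deviation of 0.09°C, borrowed from NOAA's quoted annual error) is an assumption, and the fraction of simulated trends landing in the 0.08–0.12°C per decade band depends entirely on that choice, so it will not reproduce the 1,133-in-10,000 figure.

```python
import numpy as np

# Simulate 10,000 trend-free 17-year annual series and ask how often pure
# noise alone yields an OLS trend in the band reported by Karl et al.
# The white-noise model with sigma = 0.09 C is a hypothetical stand-in for
# "the statistical properties of the NOAA data", which are not given here.
rng = np.random.default_rng(0)
n_sims, n_years = 10_000, 17
years = np.arange(n_years)

sigma = 0.09  # assumed year-to-year noise, degrees C
series = rng.normal(0.0, sigma, size=(n_sims, n_years))

# OLS slope for each simulated series, converted to degrees C per decade
slopes = np.polyfit(years, series.T, 1)[0] * 10

in_band = np.mean((slopes >= 0.08) & (slopes <= 0.12))
print(f"fraction of null simulations with trend in 0.08-0.12 C/decade: {in_band:.3f}")
```

With autocorrelated noise, as real annual temperatures exhibit, the spread of spurious trends widens and the fraction rises, which is the direction of Whitehouse's argument.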

Even if the errors of the trends quoted by Karl et al 2015 are accepted, their conclusion that they remove the “hiatus” is incorrect for another reason. The effect of their start and end points on their trends explains the differences they obtain. Their highest trend was between 2000-2014, when the start point was a cool La Nina year and the endpoint was influenced by the recent anomalously warm temperature of the northeast Pacific. Terminating the data two years earlier, in 2012, reduces the influence of the northeast Pacific and consequently reduces the trend. Commencing their analysis in 1998 (a warm El Nino year) produces a trend that is smaller than 2000-2014 because of the warmer starting point, as expected. Similarly the 1998-2012 trend is significantly smaller than the 1998-2014 trend, again due to the influence of the recent warm seas in the northeast Pacific.

This table gives my analysis of the new NOAA temperature data set compiled by Karl et al with my error estimates.

I conclude that the elimination of the hiatus claimed by Karl et al 2015 is unsafe because of bias due to the choice of start and end points that are extremes of natural fluctuations in the global surface temperature record, as well as an overemphasis on statistically poor results.

4) Gordon Hughes: What Is Meant By A “Significant” Trend?
Global Warming Policy Forum, 22 July 2015

What is meant by a “significant” trend?
How should we interpret evidence on whether there has been a hiatus in global surface warming? 
Professor Gordon Hughes, University of Edinburgh

Studies in medicine, social sciences and other disciplines tend to be full of claims that some observation is “statistically significant” with associated statements about certainty expressed as probabilities or p-values. These claims are based on testing procedures derived from classical statistics when applied to experimental data that has been collected and analysed in a particular way. Unfortunately, all too often the key assumptions bear little resemblance to the analysis that has actually been carried out.

The classical framework of testing may be illustrated by considering an experiment to determine how, say, wheat responds to the application of a herbicide designed to kill weeds. Multiple small plots in a variety of locations are assigned randomly to different treatments including no application of the herbicide and applications varying from, say, 0.2 to 5 times a standard dose. At the end of the experiment the weight of wheat grains collected from each plot is recorded. Some pre-defined statistical tests are carried out to determine whether there is a linear or S-shaped relationship between the amount of herbicide applied and the plot yield.  Due to the experimental design, factors which affect wheat yield – rainfall, temperatures, soil fertility, insect pests – are assumed to vary randomly across plots, while the measurement of the outcome is specified in advance and cannot be altered.

Using the data that has been collected we estimate a parameter β that defines the shape or slope of the response of wheat yield to the amount of herbicide applied, where the value zero means no response and values greater than zero mean that wheat yield increases with herbicide application, though not necessarily in a linear fashion.  Taking account of the variability in wheat yields across plots our statistical analysis concludes that the central estimate of β is 0.5 with a 90% confidence range of 0.3 to 0.7.  The idea is that if the same experiment were to be repeated independently 100 times then we would expect to obtain a value of β that lies outside this range in only 10 experiments.
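The estimate-and-interval calculation above can be illustrated numerically. The data here are invented for the purpose (a true β of 0.5 with Gaussian noise added), so the recovered 90% interval should roughly bracket that value; nothing below comes from the actual experiment, which is itself hypothetical.

```python
import numpy as np
from scipy import stats

# Invented dose-response data: true beta = 0.5, plus noise standing in for
# plot-to-plot variability. We then recover beta and a 90% confidence range
# in the classical way Hughes describes.
rng = np.random.default_rng(42)
dose = np.linspace(0.0, 5.0, 40)                        # multiples of a standard dose
yield_ = 2.0 + 0.5 * dose + rng.normal(0.0, 0.3, dose.size)

res = stats.linregress(dose, yield_)
t90 = stats.t.ppf(0.95, dose.size - 2)                  # two-sided 90% critical value
lo, hi = res.slope - t90 * res.stderr, res.slope + t90 * res.stderr
print(f"beta = {res.slope:.2f}, 90% CI = ({lo:.2f}, {hi:.2f})")
```

The point of the passage that follows is that this interval is only as good as the standard error it is built from: if the sampled plots understate the real-world variability, the interval is too narrow.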

Few experiments correspond to this idealised description, but even if they do, the claim about confidence intervals may be quite wrong.  The problem is that from one experiment we do not know the “true” level of variability in herbicide response across the full range of locations where wheat might be grown in the UK or Europe.  We assume that the variability – as measured by the standard error of the parameter β – is an unbiased estimate of the “true” variability, but without actually doing 100 or more experiments we cannot be sure of that.  Indeed there are good reasons why the assumption may be wrong.  The experiment may have been carried out in a season with low average rainfall or late frosts – i.e. we may have failed to randomise over all variables that affect the outcome.  So, any conclusion about the statistical significance of a parameter depends critically on whether the study has genuinely identified all of the sources of variability that might affect the observations.

The study by Karl et al (Science Express, 4 June 2015) appears in a completely different light when scrutinised in this way. It claims that for the 17 years from 1998 to 2014 their new data produces a trend increase in global temperature of 0.106°C per decade with a 90% confidence range of 0.048 to 0.164°C per decade.  Cross-checks show that the confidence range is calculated solely by using the variability in the period from 1998-2014.  But this does not accurately reflect the variability in their data for the full period from 1880 to 2014. To demonstrate the point, the trend increase in global temperature can be computed for every 17-year period between 1880 and 2014 using the method followed by Karl et al. This gives us the actual variability over all 17-year periods in the data, not an estimate based on a single period. It turns out that the actual variability is more than 3 times the Karl et al estimate. This analysis also shows that the distribution of 17-year trends is negatively skewed (the mean is much lower than the median), so that the empirical confidence range goes further into negative values for the trend than conventional calculations would suggest.
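The cross-check just described, computing the trend for every 17-year window and measuring the spread of those trends directly, can be sketched as follows. The temperature series here is synthetic (a linear trend plus noise standing in for the 1880-2014 NOAA record, which is not reproduced in this document), so the numbers are illustrative of the method only, not of Hughes's results.

```python
import numpy as np

# Empirical variability of 17-year trends: slide a 17-year window along an
# annual series, fit an OLS trend in each window, and look at the spread of
# the resulting trends rather than the standard error from a single window.
rng = np.random.default_rng(1)
years = np.arange(1880, 2015)
temps = 0.007 * (years - 1880) + rng.normal(0.0, 0.1, size=years.size)  # synthetic record

window = 17
x = np.arange(window)

def window_trend(y):
    """OLS slope over one window, in degrees C per decade."""
    return np.polyfit(x, y, 1)[0] * 10

trends = np.array([window_trend(temps[i:i + window])
                   for i in range(years.size - window + 1)])

print(f"{trends.size} windows, spread of 17-year trends = "
      f"{trends.std(ddof=1):.3f} C/decade")
```

The comparison Hughes draws is between this empirical spread across all windows and the much smaller standard error computed from the 1998-2014 window alone.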

It is possible that the underlying variability of temperature has changed since the middle of the 20th century.  Karl et al report trends for periods from 1950 and 1951, so the same exercise was repeated for all 17-year periods from 1950. The variability of these trends is, indeed, lower than for the full period, but it is still 2.4 times the estimate of variability based on the single period 1998-2014. In fact, based on an analysis of 17-year periods since 1950 one cannot rule out the possibility of no trend in temperatures, since the mean trend is 0.126°C per decade with a 90% confidence range of -0.017 to +0.217°C per decade.

Figure 1 – Estimates and confidence intervals of trend increases in temperature

The results of using the historical variability of the temperature data, rather than variability estimated from relatively short periods, are shown in Figure 1. The estimated values are shown as hatched bars, while the 90% confidence intervals are given by the vertical lines. This demonstrates that the claim that the trend increase from 1998 to 2014 was “significant” rests on an erroneous estimate of the actual variability in estimates of the trend. Indeed, even the large trend increase from 1951 to 2012 has a much wider confidence range when based on the variability of all 62-year periods since 1880.

It is important to be clear about the limitations of this kind of analysis. The global temperature has increased since 1880. It is probable but far from certain that the trend rate of increase accelerated after 1950. However, given the variability in the trends estimated for relatively short periods, the hypothesis that there was a hiatus after 1998 cannot be rejected using the Karl et al data. In fact, based on the full data series one would expect that a trend increase of at least 0.1°C per decade would be observed in about 15% of all 17-year periods examined, even if the underlying trend in global temperatures is zero.

The lesson is that no study should rely upon trends over selected short periods of time to make claims about a series with as much variability over time as global temperatures. That is as true for the relatively large increase from 1976 to 1998 as for the more recent period. Even that trend has been exceeded in 10% of all 23-year periods since 1880.

Even if the study had not drastically underestimated the amount of variability in 17-year trends in the historical data, there is another problem that is not addressed. This is: what is or was the starting point of the trend?  In the spirit of the classic warning to all statisticians – Darrell Huff’s book titled ‘How to Lie with Statistics’ – it is possible to use a particular set of data to generate a wide range of trends simply by choosing a suitable starting point.

Full paper
