The Temperature Forecasting Record Of The IPCC

  • Date: 09/06/14
  • Euan Mearns, Energy Matters

In geology we use computer models to simulate complex processes. A good example would be 4D simulation of fluid flow in oil and gas reservoirs. These reservoir models are likely every bit as complex as computer simulations of Earth’s atmosphere. An important part of the modelling process is to compare model realisations with what actually comes to pass after oil or gas production has begun. This is called history matching. At the outset the models are always wrong, but as more data is gathered they are updated and refined to the point that they have skill in hindcasting what just happened and forecasting what the future holds. This informs the commercial decision-making process.

The IPCC (Intergovernmental Panel on Climate Change) has now published five major assessment reports, beginning with the First Assessment Report (FAR) in 1990. This provides an opportunity to compare what was forecast with what has actually come to pass. Examining past reports is quite enlightening since it reveals what the IPCC has learned in the last 24 years. I conclude that nothing has been learned other than how to obfuscate, mislead and deceive.

Figure 1 Temperature forecasts from the FAR (1990). Is this the best forecast the IPCC has ever made? It is clearly stated in the caption that each model uses the same emissions scenario. Hence the differences between the Low, Best and High estimates are down to different physical assumptions, such as the climate sensitivity to CO2. Holding the key variable (the CO2 emissions trajectory) constant allows the reader to see how different scientific judgements play out. This is the correct way to do it. All models are initiated in 1850 and by the year 2000 already display significant divergence. This is what should happen. So how does this compare with what came to pass, and with subsequent IPCC practice?

I am aware that many others have carried out this exercise before, and in much more sophisticated ways than I do here. The best example I am aware of is by Roy Spencer [1], who produced this splendid chart that also drew some criticism.

Figure 2 Comparison of multiple IPCC models with reality, compiled by Roy Spencer. The point that reality tracks along the low boundary of the models has been made many times by IPCC sceptics. The only scientists this reality appears to have escaped are those attached to the IPCC.

My approach is much simpler and cruder. I have simply cut and pasted IPCC graphics into Excel charts where I compare the IPCC forecasts with the HadCRUT4 temperature reconstructions. As we shall see, the IPCC has an extraordinarily lax approach to temperature datums, and in each example a different adjustment has to be made to HadCRUT4 to make it comparable with the IPCC framework.
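To make that datum adjustment concrete, here is a minimal Python sketch of the step; the offsets are the two quoted later in this post (+0.5˚C for the FAR comparison, -0.6˚C for the TAR comparison), while the function name and structure are mine, purely for illustration.

```python
# Minimal sketch of the datum adjustment step, assuming the anomalies are
# held as a plain list of annual values. The offsets are the two quoted
# later in this post; everything else here is illustrative.

# Shift needed to put HadCRUT4 on each report's temperature datum (degC).
OFFSETS_C = {
    "FAR_1990": +0.5,   # add 0.5 degC for the Figure 3 comparison
    "TAR_2001": -0.6,   # subtract 0.6 degC for the Figure 5 comparison
}

def rebaseline(anomalies_c, report):
    """Shift a list of HadCRUT4 anomalies onto a given report's datum."""
    offset = OFFSETS_C[report]
    return [t + offset for t in anomalies_c]

# Example: three annual anomalies moved onto the FAR (1990) datum.
print(rebaseline([0.40, 0.45, 0.50], "FAR_1990"))  # [0.9, 0.95, 1.0]
```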

Figure 3 Comparison of the FAR (1990) temperature forecasts with HadCRUT4. HadCRUT4 data was downloaded from WoodForTrees [2] and annual averages calculated.
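For anyone wanting to reproduce the averaging step, here is a minimal sketch, assuming the plain two-column (decimal year, monthly anomaly) text format that WoodForTrees serves; the comment-line convention is my assumption.

```python
# Minimal sketch of the annual-averaging step, assuming the plain
# two-column "decimal_year value" text format served by WoodForTrees
# and that comment lines begin with '#' (my assumption).
from collections import defaultdict

def annual_means(lines):
    """Average monthly HadCRUT4 anomalies into calendar-year means."""
    monthly = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        decimal_year, value = line.split()
        monthly[int(float(decimal_year))].append(float(value))
    return {year: sum(v) / len(v) for year, v in sorted(monthly.items())}

# Example with three made-up monthly values for a single year:
sample = ["# hadcrut4 monthly", "2013.04 0.40", "2013.37 0.50", "2013.71 0.60"]
print(annual_means(sample))  # {2013: 0.5}
```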

Figure 3 shows how the temperature forecasts from the FAR (1990) [3] compare with reality. It should be abundantly clear that the Low model is the one that lies closest to the reality of HadCRUT4. I cannot easily find the parameters used to define the Low, Best and High models, but the report states that a range of climate sensitivities from 1.5 to 4.5˚C is used. The High model is already running about 1.2˚C too warm in 2013.
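To get a feel for what that sensitivity range implies, here is a toy equilibrium calculation using the standard logarithmic forcing relation; the 2.5˚C "Best" value and the CO2 concentrations are my own illustrative assumptions, not parameters taken from the report.

```python
# Toy equilibrium calculation for the FAR's stated 1.5-4.5 degC sensitivity
# range, using the standard logarithmic forcing relation
#   dT = S * ln(C/C0) / ln(2).
# The 2.5 degC "Best" value and the CO2 concentrations are my own
# illustrative assumptions, not parameters taken from the report.
import math

def equilibrium_warming(sensitivity_c, co2_ppm, baseline_ppm=280.0):
    """Equilibrium warming (degC) for a sensitivity given per CO2 doubling."""
    return sensitivity_c * math.log(co2_ppm / baseline_ppm) / math.log(2.0)

for label, s in [("Low", 1.5), ("Best", 2.5), ("High", 4.5)]:
    # 560 ppm is one doubling of the assumed 280 ppm pre-industrial baseline.
    print(f"{label}: {equilibrium_warming(s, 560.0):.1f} degC at doubling")
# Low: 1.5, Best: 2.5, High: 4.5 degC at doubling, by construction: the
# same emissions path spreads into very different temperature paths.
```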

Figure 4 The TAR (2001) introduced the hockey stick. The observed temperature record is spliced onto the proxy record and the model record is spliced onto the observed record, and no opportunity to examine the veracity of the models is offered. But 13 years have since passed and we can see how reality compares with the models over that very short time period.

I could not find a summary of the Second Assessment Report (SAR) from 1995 and so jump to the TAR (Third Assessment Report) from 2001 [4]. This was the year (I believe) that the hockey stick was born (Figure 4). In the imaginary world of the IPCC, Northern Hemisphere temperatures were constant from 1000 to 1900 AD with not the faintest trace of the Medieval Warm Period or the Little Ice Age, periods in which real people either prospered or died by the million. The actual temperature record is spliced onto the proxy record, and the model world is spliced onto that, to create a picture of future temperature catastrophe. So how does this compare with reality?

Figure 5 From 1850 to 2001 the IPCC background image is plotting observations (not model output) that agree with the HadCRUT4 observations. Well done, IPCC! The detail of what has happened since 2001 is shown in Figure 6. To have any value or meaning, all of the models should have been initiated in 1850. We would then see that the majority are running far too hot by 2001.

Figure 5 shows how HadCRUT4 compares with the model world. The fit from 1850 to 2001 is excellent. That is because the background image is simply plotting observations in this period. I have nevertheless had to subtract 0.6˚C from HadCRUT4 to get it to match those observations, whereas for the FAR, a decade earlier, I had to add 0.5˚C. The 250 year x-axis scale makes it difficult to see how models initiated in 2001 now compare with the 13 years of observations since. Figure 6 shows a blow-up of the detail.

Figure 6 The single vertical grid line is the year 2000. The blue line is HadCRUT4 (reality) moving sideways while all of the models are moving up.

The detailed excerpt illustrates the nature of the problem in evaluating IPCC models. While real-world temperatures have moved sideways since about 1997 and all the model trends are clearly going up, there is really not enough time to evaluate the models properly. To be scientifically valid the models should have been run from 1850, as before (Figure 1), but they have not been. Had they been, by 2001 they would have been widely divergent (as in 1990) and it would be easy to pick the winners. Instead they are brought together conveniently by initiating the models at around the year 2000. Scientifically this is bad practice, as the toy calculation below illustrates.
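Here is that toy calculation: two hypothetical linear-trend "models" warming at different rates are easy to tell apart after 150 years but nearly indistinguishable after 13. The rates are invented purely for illustration, not taken from any IPCC run.

```python
# Toy illustration (not real model output) of why the initiation date matters.
# Two hypothetical linear-trend "models" warming at different rates; the
# rates are invented purely for illustration.
def trend_c(rate_c_per_decade, start_year, year):
    """Warming (degC) above the start-year value for a constant trend."""
    return rate_c_per_decade * (year - start_year) / 10.0

LOW_RATE, HIGH_RATE = 0.05, 0.20   # degC per decade, illustrative only

for start in (1850, 2000):
    gap = trend_c(HIGH_RATE, start, 2013) - trend_c(LOW_RATE, start, 2013)
    print(f"initiated {start}: models differ by {gap:.2f} degC in 2013")
# Initiated in 1850 the models differ by ~2.4 degC in 2013, so observations
# can separate them; initiated in 2000 they differ by only ~0.2 degC.
```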
