There has been much speculation about the Harry_read_me file and a lot of sympathy for ‘Harry’ from techies. The consensus (if I dare to use that word in relation to climate stuff) is that he was handed a mess to sort out and that his comments are revealing about the whole state of the HadCrut model. I still think it goes further.
Like me, FrancisT at L’Ombre de l’Olivier had picked up on the frequent references to ‘synthetic data’:
“This is not necessarily evil fakery (despite some more hysterical claims that I’ve read); it’s a way to interpolate data that is missing for one reason or another. There are two problems here: the first is that the generation of the “synthetic” data is a black art, one that occurred once many years ago and which is now lost (maybe), and the second is the decision about when to use the synthetic data.
The latter problem is simple to explain: there are numerous cases where there is partial data (Tmin, say, but not Tmax for a station for a time period) or data that looks questionable (is it massive rainfall or a transcription error?) where the choice of whether to use a synthetic value or not is not so clear cut.
The first problem is potentially more worrying in that they seem to be based on historical climate models – and old ones at that. Since the models tend to have been written and tested against older versions of the HADCRU (and other) historical series there is a feedback loop which appears to lead to a potential problem of confirmation bias. I’m not sure that it has done so – and we don’t I think have the data to see one way or another – but if there is an “evil” bit of HADCRU then this positive feedback loop is probably where it is. Any other errors introduced are, it seems to me, not ascribable to malice but rather very definitely caused by some combination of ignorance, incompetence, carelessness etc.”
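To make the infilling idea concrete, here is a minimal sketch of one plausible way “synthetic” values could stand in for missing station readings: fill a gap with the station’s own long-term mean for that calendar month. The station data, the sentinel value, and the function name are all invented for illustration; nothing here reproduces CRU’s actual (and apparently lost) method.

```python
# Sketch of climatology-based infilling of a missing monthly value.
# All data below is made up; MISSING mimics a typical sentinel value.

from statistics import mean

MISSING = -9999.0  # sentinel for a missing monthly reading

def infill(values, missing=MISSING):
    """Replace missing entries with the mean of the observed ones,
    i.e. a crude long-term climatology for that calendar month."""
    observed = [v for v in values if v != missing]
    if not observed:
        raise ValueError("no observed data to build a climatology from")
    clim = mean(observed)
    return [v if v != missing else clim for v in values]

# Ten Januaries of Tmax for an invented station; one year is missing.
jan_tmax = [3.1, 2.8, MISSING, 3.4, 2.9, 3.0, 3.2, 2.7, 3.3, 3.1]
filled = infill(jan_tmax)
```

Even in this toy form, the danger FrancisT describes is visible: the filled value is only as good as the series (or model) it was derived from, and if that series was itself tuned against the same record, the loop closes.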
FrancisT is quite right; however, that still leaves us wanting answers to lots of questions. The problem with climate science is that fantasy got in the way of producing a valid model. When you design a model, you base it around your theories of what you think is happening; more often than not a model will actually do what you want it to do. Fantasy in; fantasy out. In real science and engineering, however, real experiments are used to prove – or disprove – the validity of the model. You accept the results, tweak (or redesign) your assumptions and your model, and either try again or move on.
The CRU emails show the ‘Hockey Team’ unable to accept that dear old Earth is following through with real data that they cannot model: 1255532032.txt
Kevin Trenberth says
“The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data [...published here...] shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.”
They don’t accept that thousands of monthly temperature readings from all over the globe have to have some validity. Unbelievable.
OK, back to ‘Harry’ and his activities. I think the key is to look at the idl_cruts3_2005_vs_2008b.pdf file as a before/after of ‘Harry’s’ work. This document shows graphs of seasonal temperatures for two data sets (2005 and 2008b) for world regions. We see ‘Harry’ is struggling to integrate new data and the Read_me file starts in 2006, so the questions are:
- what are the differences between the data sets?
- how much does each rely on ‘synthetic data’?
- how does this output data relate to the classic anomaly graph, when so many of the individual records show little warming?
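On that last question, the anomaly construction itself is mechanical: each monthly (or yearly) value minus the station’s mean over a baseline period (a 1961–90 normal is the usual convention in CRU products). The sketch below shows that arithmetic with invented numbers; the helper name is hypothetical.

```python
# Sketch of deriving anomalies from a station series relative to a
# baseline period. Temperatures and years are invented.

def anomalies(series, baseline_years):
    """series maps year -> mean temperature; returns year -> anomaly
    relative to the mean over the years of baseline_years present."""
    base = [series[y] for y in baseline_years if y in series]
    baseline = sum(base) / len(base)
    return {y: t - baseline for y, t in series.items()}

temps = {1961: 9.8, 1975: 10.0, 1990: 10.2, 2005: 10.6}
anom = anomalies(temps, range(1961, 1991))  # 1961-90 baseline
```

The point of framing it this way: the anomaly graph is only as trustworthy as the series fed into it, which is exactly why the synthetic-data question above matters.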
I have been working on an analysis of this since Friday, but have had too many distractions. It is coming soon though, I promise.