The more things change, the more they stay the same.
GISS reports updates to the surface temperature analysis regularly (here). In the last year, for example, there has been a switch to USHCN v2 data for US stations (November 2009) and, most significantly, a change in January 2010 to using satellite ‘nightlights’ to evaluate the urban status of stations globally. The effects, checked against previous versions (NASA draft paper), show that the major change of using ‘nightlights’ results overall in a very small change in global average temperatures. Here, for example, are the small differences between January and March 2010 in the GISS Land-Ocean Temperature Index as a result of the ‘nightlights’ changes:

Figure 1. Changes in Land-Ocean Temperature Index: Difference before (Jan 2010) and after (Mar 2010) implementation of 'nightlight' measurement of station urbanisation.
These differences are tiny. This was a change to the land data only, and the globe is 70% ocean; sea surface temperatures are more stable than land temperatures. Looking at land temperatures alone, the change is still small, but it is five times what we saw above:

Figure 2. Changes in GISS Land-surface air temperature anomalies between September 2009 and June 2010
The changes alter the overall temperature trend by less than 0.01°C/century, and it bothered me that there was so little change overall. I couldn’t quite put my finger on why until I looked at it in detail. [Peter O’Neill has discussed (here) a number of concerns with the accuracy of the locations used; I’ll leave thoughts on that to another day.] Prior to this change, only US stations were adjusted using a brightness index; urban classification for the rest of the world relied on very outdated population figures. Surely, I thought, this change would have had a more noticeable effect. Looking at the two hemispheres separately shows slightly greater change. GISS also publishes data by latitude zone, and the change really does show up in some of the zones. Here’s what the current data, split out by latitude zone, looks like:
The change by latitude band is interesting. There is almost no change at higher latitudes (Canada, Russia, Northern and Central Europe) but substantial change between 44N and 44S, almost exclusively in older data, prior to 1940. This is not a great surprise actually (more on correction of urban temperatures in a moment).
The differences run to +/- 0.4°C (and these are still averages for each latitude band), but all these zonal changes add up to the average difference (red line) in Figure 2 – there is a lot of ‘cancelling out’ that is masked by the averaged data. At individual locations the changes can be huge, as can be seen in the figure below, where location-based anomalies run to +/- ~3.0°C.
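To make the ‘cancelling out’ concrete, here is a minimal sketch in Python, with entirely invented numbers, of how zonal changes as large as +/- 0.4°C can average to essentially nothing:

```python
# Hypothetical zonal changes in degrees C -- invented for illustration,
# not taken from the GISS data.
zonal_changes = [+0.40, -0.35, +0.30, -0.38, +0.05, -0.02]

largest = max(abs(c) for c in zonal_changes)
mean_change = sum(zonal_changes) / len(zonal_changes)

print(f"largest individual change: {largest:.2f} C")      # 0.40 C
print(f"mean of all changes:       {mean_change:.2f} C")  # 0.00 C
```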

Figure 5. Changes in mapped global anomalies 1900-2009 at 250km resolution due to implementation of 'nightlights' data. (reproduced from Hansen et al., draft paper as above, Figure 2)
[I almost didn’t notice that the data in Figure 5 covers 1900-2009 only.
Why 1900-2009?
The mask on the left of the graph, covering 1880-1900, gives a clue. The most extreme changes are prior to 1900; these are likely to affect very few stations because of the sparser global data coverage at that point.
What would the anomaly scale in Figure 5 look like if they were included?]
It amazes me that such profound local changes have ‘no overall effect’, although with the structure of the GIStemp programme I can see how this can be. At one level I do not have a problem with this, but on another level it is extremely frustrating. This change is intended to make the classification of rural and urban stations more accurate, and therefore to determine which data records are adjusted for the Urban Heat Island (UHI) correction and which are the rural stations that provide the adjustment. Since the UHI correction is done by warming past temperatures, it is the past that is more substantially changed. Herein lies my problem with it. GIStemp uses only a single designation in the station metadata, which is not time-dependent. It makes no distinction between ‘what is’ now and ‘what was’ in the past. So a station classified as urban now is regarded as always urban (even if it was rural in the past); the UHI correction is applied, and that in itself is not a problem. But it does mean that the historical part of the record that was rural can no longer take part in the correction of other urban stations, and the programme has to reach further and further afield to find stations that can do the adjusting (with less and less accuracy).
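To illustrate what a single, time-independent designation means in practice, here is a minimal sketch (the names and data structure are my own invention, not GIStemp’s):

```python
# A station carries exactly one urban/rural flag, so classification
# cannot vary with time: the `year` argument below is simply ignored.
def is_rural(station, year=None):
    return station["nightlight_class"] == "rural"

# A station that urbanised in, say, 1970 is urban for its whole record,
# so its genuinely rural pre-1970 years can no longer serve as
# adjusting data for neighbouring urban stations.
station = {"id": "XYZ", "nightlight_class": "urban"}
print(is_rural(station, year=1930))  # False, even though it was rural then
```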
Looking at the designations of the stations under the new method (here) and comparing the unadjusted and homogenised station data (here), there seem to be many ‘quirks’ between what is adjusted and what is not, and this does not always match the Urban/Semi-urban/Rural classifications listed. I suspect there are many ‘issues’ still being worked out.
My ultimate concern is that the changes really only visibly affect the older temperatures. Once again, we are rewriting the past for comparison with the present, and it takes a lot of effort to get that right, which means it is all too easy for gross errors and misrepresentations to slip through. Think about it. GISS is just papering over the cracks with this one.
Your last paragraph is not very well thought out. Yes, GISTEMP’s peri-urban adjustment (step 2) adjusts older temperatures while preserving recent temperatures. But that is only per station. When gridding (step 3), each station is considered as an anomaly series, so it makes no difference whether you think of that as “warming older data” or “cooling recent data”.
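To see why, here is a toy demonstration (invented numbers, not GISTEMP code): the two framings differ only by a per-station constant, and converting a series to anomalies removes any constant:

```python
def anomalies(series):
    """Anomalies relative to the series' own mean baseline."""
    base = sum(series) / len(series)
    return [round(t - base, 6) for t in series]

raw = [14.0, 14.1, 14.3, 14.6, 15.0]   # invented station record
adj = 0.5                              # invented adjustment size

warm_past   = [t + adj for t in raw[:2]] + raw[2:]   # "warming older data"
cool_recent = raw[:2] + [t - adj for t in raw[2:]]   # "cooling recent data"

# The two adjusted series differ by the constant `adj` everywhere...
print({round(a - b, 6) for a, b in zip(warm_past, cool_recent)})  # {0.5}
# ...so their anomaly series are identical.
print(anomalies(warm_past) == anomalies(cool_recent))  # True
```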
As to papering over the cracks and letting gross misrepresentation through… What would an urban adjustment scheme have to look like in order for you to support it? What corrections can you suggest to the current GISTEMP peri-urban adjustment? (This is not intended as a rhetorical question; I am in the middle of clarifying exactly this step for Clear Climate Code.)
My March blog post on this change is here, by the way.
Aren’t last paragraphs always written in haste? 😉 Actually, I would only change the first sentence of that paragraph slightly.
The whole point of the peri-urban adjustment is that it warms the older data to reduce the effect of UHI. There are stations which historically were rural but which, on the basis of the (1996-7) satellite data, are now classified as urban; this means they cannot serve as adjusting stations, nor can they take part in the GIStemp calculation unless they can themselves be adjusted by other rural stations that have not been reclassified. How is this not rewriting the past, or misrepresentation? It means the loss of historical data that is, in some cases, already sparse.
There are some improvements as a result of the change, and overall I welcome it; however, I stand by saying that there are issues with it, not least the poor accuracy of the latitude and longitude data (and therefore misclassifications), and certainly the loss of older data I mention above. How widespread are these issues? I can’t say – yet – but they should be a concern.
Regarding the peri-urban adjustment, of course I will ask for the impossible. The best we can hope for is continuous improvement, of which the nightlights adjustment is one small step.
1. We need accurate locations for the stations, certainly to within a few feet, and then a re-evaluation of the night radiance.
2. There simply is not sufficient data on the size of the UHI effect, now or in the past, at different locations and under different climate and weather conditions. Even if we had an accurate understanding of the current UHI at each affected location, that does not help with the past. It would need something like accurate population data per station for every decade of its reporting, and therefore a subroutine to manage the adjustment (see the sketch after this list). You may think that homogenisation is sufficient to reduce peri-urban effects, but that requires certainty that we understand how each rural station’s data has been affected by any change, natural or anthropogenic, in its surroundings. From what I have seen of the data, and of the metadata, that is far from the case. So…
3. We need a very accurate station history.
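As a sketch of what point 2 might mean in code: this is entirely hypothetical; nothing like it exists in GIStemp, and the population figures and threshold below are invented for illustration.

```python
# Invented per-station population by decade.
POPULATION_BY_DECADE = {
    "XYZ": {1900: 800, 1910: 1200, 1920: 5000, 1930: 22000},
}

URBAN_THRESHOLD = 10000  # assumed cut-off, for illustration only

def is_rural_in(station_id, year):
    """Classify a station decade by decade instead of once for all time."""
    decade = (year // 10) * 10
    return POPULATION_BY_DECADE[station_id][decade] < URBAN_THRESHOLD

print(is_rural_in("XYZ", 1905))  # True: still a village
print(is_rural_in("XYZ", 1935))  # False: has urbanised
```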
Yes, I know you have to work with what you’ve got.
I’ll stop there, because I find it very hard to support the notion of a ‘global average temperature’. The climate is changing; teasing out how much results from anthropogenic effects and how much is natural is better done at a local level. When I first started looking into the information underpinning global warming, I naively thought that trends were worked out at a local level and then averaged. It is very simple: if CO2 is responsible for most of the warming then, on average, we should see warming in most parts of the world; we don’t. That tells me that natural climate drivers are stronger than an insignificant trace gas. At best we see cyclical changes which differ across the world; we’ve been in a warming phase recently, that’s all.
You say you find it hard to support the notion of a global average temperature, but then you say the climate is changing. How do we measure how much the climate is changing (on average)? Are you aware that GISTEMP does not in fact measure global average temperature? It shows the change in average temperature anomaly.
You say we don’t see warming in most parts of the world; on what basis are you making that assertion? When you say “we’ve been in a warming phase recently”, how can you say that if you also believe that most parts of the world are not warming? And again, how can you say that if you reject the notion of a global average temperature?
Your shopping list of station metadata is of course welcome, but I don’t think we need it; all the indications are that, by and large, the existing data is just about good enough to do global analyses. That’s not to say that station locations that are out by over 200 km are a problem we can ignore; we can’t. But on average it tends not to matter much. In fact the scientific community is well aware of the problem, and one of the initiatives to fix it is the surfacetemperatures.org project. Perhaps you’d like to contribute?
But even if you had your shopping list – your list of land use, population, and so on going back 200 years for every station, documented with each station’s equipment, procedures, and the blood type of all personnel involved – you haven’t said what you would do with it. What would the analysis look like?
“Misrepresentation”? I’m still not sure who is supposed to be misrepresenting what. GISTEMP performs a peri-urban adjustment on the basis of data that is publicly available. The procedure is well documented, and the source code is available. Is making a documented adjustment a misrepresentation? It is perfectly possible, at least using ccc-gistemp, to run the GISTEMP analysis without the peri-urban adjustment (as I do in the blog post I linked to earlier). Is that then not a misrepresentation?
Climate is changing; climate has always changed. It is a chaotic system where local microclimate effects can be strong. How can any global average be sure to represent adequately the local nuances and variations (anomalies)? In the existing input data, the spatial and temporal coverage is highly variable. What spatial resolution is needed to capture local responses to changes in long-term weather patterns? How can one station, say in the lee of mountains in an arid rain-shadow area, adequately represent an entire region, when another 100-200 km away, with completely different altitude, cloud cover, rain/humidity and temperature, responds differently but is only present in the record for a fraction of the time of the arid one?
Using anomalies is necessary, yes, but how can you be sure the responses of these two stations will track each other over long periods? The programme requires an overlap of only 20 years minimum when they both report. What GIStemp does is assume that one station will always be a good proxy for others in the area.
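To make the assumption explicit, here is a rough sketch of combining two stations via their overlap; it mirrors the general idea of GIStemp’s reference station method, but the code and the example series are my own illustration, not the real implementation:

```python
def combine(reference, other, min_overlap=20):
    """Offset `other` so its mean matches `reference` over the years
    they share, provided the overlap is at least `min_overlap` years."""
    common = sorted(set(reference) & set(other))
    if len(common) < min_overlap:
        return None  # too little overlap to combine
    offset = (sum(reference[y] for y in common) -
              sum(other[y] for y in common)) / len(common)
    return {year: t + offset for year, t in other.items()}

# Invented example: `other` runs 0.3 C warmer but otherwise tracks
# `reference` exactly, so the offset removes the difference entirely.
reference = {y: 0.01 * (y - 1950) for y in range(1900, 2000)}
other     = {y: 0.01 * (y - 1950) + 0.3 for y in range(1950, 2000)}
merged = combine(reference, other)
print(round(abs(merged[1990] - reference[1990]), 6))  # 0.0: offset removed
```

The whole construction rests on the mean difference over the (possibly short) overlap being representative of the entire record, which is precisely the ‘good proxy’ assumption I am questioning.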
If climatic systems are cyclical (whether regular or not), modelling practice in engineering aspires to at least three full cycles to show the stability of the relationship between two datasets. We just don’t have that quantity of data.
Go and look up natural climate cycles and multidecadal oscillations.
Examples: in the Arctic; Madagascar; China; Turkey. You might say these are cherry-picks, but ask yourself this: if a cycle shows up in one station but an adjacent one shows continuous warming, which one is right? Is this not a case of the differences I suggest above? In that case, should we not be concerned if one or other of them drops out of the data record used by GIStemp?
The scientific community seems more keen to reject the findings of the surfacetemperatures.org project than to embrace its implications (I refer of course to Menne et al, 2010, but perhaps there are more positive discussions behind the scenes now). I’d love to contribute if my spare time wasn’t so scarce already – is there any funding in it? 😉 (I’d love to confirm that Big Oil pays for sceptical blogs, but regrettably nothing has come this way).
LOL. What would I do? I wouldn’t; I would take an entirely different path based on local effects – details yet to be worked out, I guess. Perhaps you can help: if GIStemp bases everything on anomalies, why bother with a baseline period?
You confused surfacetemperatures.org with some other website. surfacestations.org perhaps?
I am not funded to do any of my work on Clear Climate Code. That doesn’t stop me from trying to produce the clearest implementation of the GISTEMP algorithm that I can.
I know you are not funded for CCC – I have read you or Nick Barnes commenting to that effect previously – I was being facetious.
I was not confusing surfacetemperatures.org, just misreading the string of letters (the brain tends to look at the first and last parts of a string). That is the project that was mentioned in Nature in May; I remember. I have ambivalent feelings about it: on the one hand I support the idea of more locally accurate evaluation of temperature variations than what we’ve got, but on the other hand, since I see no ‘looming crisis’, I can see no value in expending the resources that will be needed.