Monday, August 2, 2010

Inquiry about the purpose of redoing the reconstruction of past temperature variations based on thermometer records.

We received this inquiry:
"As a general science enthusiast with a particular interest in public controversies about scientific subjects, I've been trying to get myself informed on the science of climate change. I realise you must be very busy, and that you probably get questions like these all the time from laymen, but I would really appreciate your response.
       In a recent article in Der Spiegel, you were quoted as advocating an independent reconstruction of surface temperatures. I can certainly see how this may be necessary for regaining public trust after the past few months. However, a somewhat more skeptical friend of mine has inferred from this that you have significant doubts about the basic conclusions of the temperature analyses (i.e. that we are currently still in a long term warming trend, and have been for the past several decades), and that you expect substantial differences to result from a new analysis. I did not get this impression from the article, but as the original version was in German and my German is quite rusty, my impression may be mistaken. Is my friend correct in making this inference - that is, do you expect significant changes in these basic conclusions from an independent analysis?"

which Hans von Storch answered with: "Your friend is NOT right. I would not expect significant changes in a new analysis; instead, I would expect that the thermometer-based temperature results (as opposed to tree-ring estimates) published so far would be almost completely reconfirmed. But if this additional exercise were done by independent people, trust in the result, and in climate science as a whole, would be significantly increased. Thus, the measure would be needed for public communication, not for purely scientific reasons.
See also our statement in Nature online, 18 December 2010."

6 comments:

ghost said...

Independent algorithms, implementations, methods, and open source are great and could(!) establish more trust. For example, I met several people who were skeptical about GISTEMP, and the open implementation at clearclimatecode.org and their own experience showed them: GISTEMP is not so bad at all.


However, there is a desire for more openness, better documentation, better quality, better everything. For example, a group (MET and others) is collecting ideas about the construction of next-generation surface temperature data products in this nice project: http://www.surfacetemperatures.org/ It is an open discussion.

For me, it is a sign that the current products are great, but of course not perfect. And secondly, it is a sign that climate science will have a better connection to society: better documentation, better understanding; in short, an attempt to leave the ivory tower.

Maybe one can also see it as a maturing process: it will be a service, not "only" a scientific prototype. But I assume the project won't be cheap...

MikeR said...

I have asked here before: It would seem to me that the state of modern technology should allow a much better and simpler job of measuring surface temperatures than is currently being done. Simple, cheap, sturdy, solid-state thermometers, perhaps positioned by GPS and powered by solar power cells... And lots of them, providing much better coverage than the currently very sparse situation with hundreds of miles between thermometers.
Earth's surface is 5E8 km^2, so one every thirty kilometers would need 5E5 thermometers. (Perhaps one should take the ocean into account.) If each one costs $100, that's $50,000,000, plus the cost of placing and maintaining them...
Place one every hundred kilometers, and the cost goes down to $5,000,000.
Is that too much? I would think that the data would be valuable for many purposes, and it would be nice to really have data, rather than trying to squeeze it out of too few thermometers.
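MikeR's back-of-the-envelope numbers can be checked in a few lines (a sketch only; the 5E8 km^2 surface area and the $100 unit cost are the commenter's round figures, and placement and maintenance costs are excluded):

```python
# Back-of-the-envelope cost of a dense global thermometer network,
# using the round figures from the comment above.

EARTH_SURFACE_KM2 = 5e8   # commenter's rounded surface area of the Earth
UNIT_COST_USD = 100       # assumed cost of one solid-state thermometer

def network_cost(spacing_km):
    """Sensors and cost for one thermometer per spacing_km x spacing_km cell."""
    n_sensors = EARTH_SURFACE_KM2 / spacing_km**2
    return n_sensors, n_sensors * UNIT_COST_USD

for spacing in (30, 100):
    n, cost = network_cost(spacing)
    print(f"{spacing:>3} km grid: ~{n:.1e} sensors, ~${cost:,.0f}")
```

At 100 km spacing this reproduces the $5,000,000 figure exactly; at 30 km spacing the exact count is about 5.6E5 sensors, which the comment rounds to 5E5.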

eduardo said...

we received another inquiry from the same reader as in the main post:

'If I may ask one final question: as I understand, you have your doubts about the reliability of dendroclimatological reconstructions of past temperatures. Based on what I've read, however, reconstructions based on other proxies typically give similar (though not identical) results. Do you think that the general conclusions based on such paleorecords, i.e. that the MWP was most likely somewhat cooler than today and that the current rate of warming is unusual, are reasonably robust?'



I will try to summarize the present state of knowledge.

Climate proxies can only provide approximate reconstructions of past climate. In my opinion, the data available so far do not provide a definitive answer as to whether the MWP was *globally* warmer or cooler than 'today'. Please consider that one has to be careful with the use of the word today, as it may refer to the last decade or to the mean of the 20th century. Many proxies yield information about past temperatures not at interannual timescales but only for longer periods, say decades, and so I don't think it is possible to pin down a particular year as the warmest in any period of the millennium. Also, proxies represent, even in the ideal case, local or regional temperatures, and as we have seen this year, for example, temperatures can be very cold in some parts of the world and very warm in others. If we lived in the year 2500 and tried to reconstruct the temperature in winter 2010 with proxies only from continental Europe or the US, we would think temperatures had been very cold, when in reality they were very warm globally.

In summary, in some regions it does seem that the MWP was warmer than, say, the 20th century mean, but the global picture is very uncertain.
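The sampling problem can be illustrated with a deliberately artificial example: a world of 36 equal-area regions, mostly warm in a given year, with a small cold block from which all of our hypothetical proxies happen to come (every number below is invented for illustration):

```python
# Toy illustration of the regional-sampling problem: proxies drawn from
# one region can badly misrepresent the global mean anomaly.
# All numbers are invented for illustration.

N_REGIONS = 36
anomalies = [1.0] * N_REGIONS   # most regions warm in this hypothetical year
for i in range(4):              # a contiguous cold block (think Europe/US in winter 2010)
    anomalies[i] = -2.0

global_mean = sum(anomalies) / N_REGIONS
sampled_mean = sum(anomalies[:4]) / 4   # proxies available only from the cold block

print(f"true global mean anomaly: {global_mean:+.2f} K")
print(f"proxy-sampled estimate:   {sampled_mean:+.2f} K")
```

The regionally sampled estimate has the wrong sign, even though every individual "proxy" is perfectly accurate for its own region.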

However, the question of whether the MWP was warmer or cooler than today is only partially relevant. On the one hand, it is relevant in the context of impacts. If the Arctic happened to be warmer around the year 1000 than in the 20th century, then the Arctic ecosystem has been exposed to these warmer temperatures before. I think, however, that the main message of current climate predictions is not that the present level of temperature is *now* dangerous, but that it will become dangerous in the future, when it will clearly exceed the level attained in the MWP in all regions.

On the other hand, the level of temperatures in the MWP does not affect the other main message, namely that the effect of GHG has been detected in the present instrumental temperature record. This means that one can identify the pattern of temperature change expected from GHG (basically, more warming at high latitudes, more warming in the mid-troposphere, cooling in the stratosphere). It does not mean that the amplitude of this fingerprint is dangerous *now*. It means that this pattern of warming is very unlikely to be compatible with our knowledge of natural variations. The conclusion is that if GHG concentrations continue rising, this pattern of warming will intensify and eventually become dangerous. The temperatures in the MWP play no role in this conclusion.

Now, it is sometimes argued that our knowledge of natural variability is still limited, and that perhaps the present pattern of warming was also present in the MWP, or in other periods for that matter. This might be true, in my opinion. But the conclusion 'this pattern of warming is very unlikely to be compatible with the present knowledge of natural variations' holds. It is what we know now, and one cannot invent knowledge from what we still do not know.

eduardo said...

@ 2
Mike,

the main problem in climatology is not to measure temperature accurately now, but to obtain data over long periods that have been collected in a homogeneous way: ideally, the same instrument at the same undisturbed location. Accuracy is of course desirable, but it is not the main issue. In this sense, the needs of climatologists and meteorologists are incompatible, and this is at the root of some of the problems in the analysis of the climatological data set. A weather service would like to have very accurate measurements of temperature at a certain date. If two years later new instruments come onto the market that can measure with ten times more precision and ten times more frequently, the meteorologist would be delighted, and the climatologist would get a headache: the temperature time series before and after the replacement are not directly comparable any more; spurious trends, step changes, and so on may appear.
A bit provocatively: for us it would have been much better if the old instruments used to measure temperature in the countryside in the 19th century had been kept in use, unchanged, until the present day, even if their accuracy was not that good.
See also the guest commentary by Reinhard Böhm here
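A toy calculation makes the inhomogeneity problem concrete: an instrument replacement that shifts readings by a constant offset produces a spurious trend in an otherwise flat record. All numbers below are invented; the 0.5 K offset stands for an arbitrary calibration shift between the old and new instrument.

```python
# Toy example of the inhomogeneity problem: replacing an instrument
# mid-record introduces a step change that a naive trend fit mistakes
# for real warming. All numbers are invented for illustration.

years = list(range(1950, 2011))
true_temp = [10.0] * len(years)   # climate is actually constant
OFFSET = 0.5                      # new instrument reads 0.5 K higher from 1980 on
measured = [t + (OFFSET if y >= 1980 else 0.0)
            for y, t in zip(years, true_temp)]

# Ordinary least-squares trend over the full, unhomogenized series:
n = len(years)
mean_y = sum(years) / n
mean_t = sum(measured) / n
slope = (sum((y - mean_y) * (t - mean_t) for y, t in zip(years, measured))
         / sum((y - mean_y) ** 2 for y in years))

print(f"spurious trend: {slope * 100:+.2f} K per century")
```

The fitted trend comes out at roughly +1.2 K per century even though the underlying "climate" never changed, which is why a less accurate but unchanged instrument can be more valuable to a climatologist than a better one introduced mid-record.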

MikeR said...

Eduardo, thanks for the very interesting response. I wonder, though, if it's worth giving the climatologist headaches, if in the long run we will have better data going forward. Mr. Climatologist can always keep using his old thermometers if he wants! The rest of us will be able to watch what's happening, over the course of the next decade or whatever.
Otherwise, we will always be hard pressed to be sure whether what we're seeing is because of bad data.

werecow said...

re: Eduardo,

I didn't realize you had replied to my email on this page, so I'm a little late in reading it. Still, better late than never. Thanks for your response!