Rejected. Pt. II
In Part I we looked at the Margin of Error (MOE) of the Brown et al. survey of climate scientists. Assuming a random sample, the poll displayed a range of opinion among climate scientists concerning the position set forth by the IPCC. Even taking into account the fairly large MOE, the poll indicates a lack of complete consensus. There are, however, a number of other sources of error which in my view would and should prevent any half-decent journal from publishing the poll in its present form (E&E would jump at it, I'm sure).
According to Harris Interactive, other common poll errors include:
- Non-response errors
- Errors due to question wording or order
- Errors due to interviewers
- Weighting errors
Let’s look at how the first of these might apply to the Brown et al. poll.
Of the 1807 people who were contacted, 140 responded. From what I understand of political polls, this is an average response rate. Did those who did not respond differ, as a group, in any meaningful way in their preferences and attitudes from those who did reply?
Quite possibly.
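Before getting to what the authors say, a quick back-of-the-envelope sketch shows why a 140-out-of-1807 return is so fragile. The two counts come from the paper; the 70% agreement figure and the non-responder scenarios below are entirely hypothetical.

```python
# Back-of-the-envelope: the silent majority outnumbers the responders ~12 to 1,
# so the overall picture depends almost entirely on what they think.
contacted = 1807
responded = 140
silent = contacted - responded

print(f"Response rate: {responded / contacted:.1%}")            # ~7.7%
print(f"Non-responders per responder: {silent / responded:.1f}")

observed_agree = 0.70  # hypothetical share of responders agreeing with the IPCC
for silent_agree in (0.30, 0.50, 0.70, 0.90):  # hypothetical scenarios
    overall = (responded * observed_agree + silent * silent_agree) / contacted
    print(f"If {silent_agree:.0%} of non-responders agree, the overall figure is {overall:.1%}")
```

In other words, whatever the 140 said, the 1667 who stayed silent could drag the true number almost anywhere.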
The authors analyse this problem in their paper:
On the coverage and responses, there are large discrepancies between the numbers of responses from various countries. The lack of response from China, along with the number of ‘message failure’ automated response, suggest that few, if any, of the scientists in that country received the email. This is interpreted as a function of a server error or malfunction. The relatively large responses from the United States and the United Kingdom are, at least in part, a function of the language in which the poll was constructed (although almost all climate change research is in English); no translations were made; all enquiries were in English. It should also be noted, though, that the Global community of scientists involved in climate related disciplines is heavily skewed, with a large proportion of the work taking place in US and EU academic and state institutions. Therefore, though the language bias is likely to have suppressed the level of response from countries where English is not the common language, the international range and proportion of responses is interpretable as broadly representative of the community as a whole.

OK, so far, so good. But...
One consequence of the diverse and relatively low response rate from countries other than the USA and, to a lesser extent, the UK, however, is that no statistically meaningful international comparisons can be made at this time, though a comparison of scientific opinion from those who responded within the USA and in ‘other countries’ collectively is possible.

If no statistically meaningful international comparisons can be made, according to the authors’ own opinion, why is the following in the main body of the paper?
In addition, responses were broken down by country of response. By applying a numerical value to the responses it is possible to see interesting differences between opinions within the USA and outside, in particular in EU countries. The mean score was 5.0 (where 5.0 means agreement with the IPCC WG Report). In the USA, the mean response was 4.8, compared to 5.2 in all other countries, and 5.6 in EU countries. The scientists based in the USA who replied to the survey are slightly more in disagreement with the Report than scientists outside, and scientists based in the EU (with particularly strong signals [5.9] from a small sample coming from Germany), tend to be more ‘alarmed’ than in other countries. Another small response, from Mexico, showed anomalously large concern, scoring 6.3.

Gawd, at least put standard deviations in!!
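Without a spread and a sample size, a difference in means from a handful of respondents tells you very little. As a rough illustration, here is what reporting a standard deviation and standard error alongside each subgroup mean might look like. The per-respondent scores below are invented, chosen only to sit in the same range as the reported country means; the paper does not publish the raw data.

```python
import statistics

# Invented per-respondent scores, purely illustrative; the survey's raw
# per-country responses were not published.
subgroups = {
    "USA":     [4, 5, 4, 6, 5, 4, 5, 6, 4, 5, 5, 4, 6, 5, 4, 5, 6, 3, 5, 5],
    "Germany": [6, 5, 7, 6],   # a small sample, as the paper itself notes
    "Mexico":  [6, 7],         # an even smaller one
}

for country, scores in subgroups.items():
    n = len(scores)
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)      # sample standard deviation
    se = sd / n ** 0.5                 # standard error of the mean
    print(f"{country:8s} n={n:2d}  mean={mean:.1f}  sd={sd:.1f}  se={se:.2f}")
```

With standard errors of the order of half a point for the tiny subgroups, a 4.8-versus-5.6 gap stops looking like a clean signal.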
Most importantly, the lack of replies from whole demographic groups not only serves to invalidate the comparison between these groups, it potentially invalidates the entire survey.
James Annan sort of gets it here:
Of course the main weakness is in the response rate of ~10%: that leaves open the possibility that the 90% non-responders were either all firmly supportive of the IPCC and saw the poll as a bit of irresponsible trouble-making that didn't justify a response, or all so thoroughly alienated and marginalised by the IPCC that they don't have the energy to grumble about it. Personally, I think the first of these is much closer to the truth, but it seems we will never know for sure.

But then completely misses the mark:
Of course, all surveys suffer from this problem to some extent. I bet all the current polls on Clinton vs Obama have enough refusals to completely dominate the result, were they all to end up on one side of the fence. Yet you don't see reports saying "Clinton 22%, Obama 24%, and the other 54% slammed the phone down".

Just having people refuse isn’t necessarily the problem. It’s only a problem if those who refused would, as a group, have answered differently from those who did reply. As I said in Pt. I, Australian polls, often with high refusal rates, are generally extremely accurate, with the final election results usually falling within the MOE of most of the big polls. (It helps a lot that voting here is compulsory, so the polls can be compared directly to the results, unlike in the US.)
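That distinction is easy to show with a toy simulation. Everything in it is made up (a population where 70% agree, and the reply probabilities); the only number borrowed from the poll is the 1807 people contacted. When refusal is unrelated to opinion, a ~92% refusal rate barely matters; when the two are linked, the estimate goes wrong no matter how many people politely answered.

```python
import random

random.seed(1)

def run_poll(population, p_reply_if_agree, p_reply_if_disagree, n_contacted=1807):
    """Contact n people; each replies with a probability that may depend on their view."""
    contacted = random.sample(population, n_contacted)
    replies = [view for view in contacted
               if random.random() < (p_reply_if_agree if view else p_reply_if_disagree)]
    return sum(replies) / len(replies), len(replies)

# Hypothetical population: 70% agree with the IPCC position.
population = [True] * 70_000 + [False] * 30_000

# Refusal unrelated to opinion: low response rate, but an unbiased estimate.
est, n = run_poll(population, 0.08, 0.08)
print(f"Equal refusal:        {est:.0%} agree from {n} replies (truth: 70%)")

# Dissenters twice as keen to reply: similar response rate, biased estimate.
est, n = run_poll(population, 0.07, 0.14)
print(f"Differential refusal: {est:.0%} agree from {n} replies (truth: 70%)")
```

Which of those two worlds the Brown et al. poll lives in is exactly what a 7.7% response rate can't tell us.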
The problem with this specific poll is that not only have entire countries been skipped, but the S. Freds and the Pat Michaels of the world were probably far more likely to respond. A statistically significant denial of the IPCC position (on the ‘it’s all overblown’ side, of course) would help them propagate the ‘consensus is dead’ meme they work so hard to spread. (As an aside, why is it that denialists never seek to kill the ‘IPCC consensus’ by showing that many scientists and their peer-reviewed papers clearly believe that the IPCC is being far too conservative? It’s almost as if consensus-breaking only works one way. Disingenuous or dishonest? You decide.)
On its own, the non-response error is possibly not fatal – but the errors are sure starting to add up.
More in Pt. III.