Scientists Respond To Tol’s Misrepresentation Of Their Consensus Research
By Collin Maessen

To quote John Reisman, “Science is not a democracy. It is a dictatorship. It is evidence that does the dictating.” It’s this evidence-based ‘dictatorship’ that is the basis for a scientific consensus. Based on this ‘dictatorship’ of evidence we know that global warming is real, that we’re causing it, and that it’s a problem if we don’t act. This presents a real problem for those who deny there is a problem or want to minimize its consequences.
In science, careers are made by overturning existing ideas and findings. This is why a consensus can only arise in science when other scientists cannot find flaws in earlier findings. As Richard Alley has said about showing that global warming isn’t a problem: “Is there any possibility that [among] tens of thousands of scientists there isn’t one of them that got the ego to do that? It’s absurd!”
Falsifying a well established scientific theory or concept advances the career and the reputation of a scientist far more than confirming it does.
Attacking the consensus
This is why the paper published in 2013 by Cook et al., which analysed the scientific consensus on human-caused global warming in the scientific literature, is attacked so much. Finding a 97% consensus in the scientific literature that we’re causing most of the rise in temperature is inconvenient for those who deny this. If they manage to discredit such studies, or sow doubt about them, they can prevent the public from acting.
Like Oreskes said, spreading doubt is the most effective strategy a science denier has. This type of attack is crucial for maintaining a gap between what scientists agree on and what the public thinks scientists agree on, so that action can be delayed. This tactic is how the tobacco industry successfully delayed action against the harmful effects of smoking for decades.
A consensus is dangerous for those who deny human-caused global warming, or minimize its consequences, as it shows that they are in the minority. They are a very small percentage compared to the scientists who say, based on the evidence, that human-caused global warming is a reality.
That’s why one of the most often used tactics is trying to make the consensus seem tiny. During an interview with me, Dana Nuccitelli, one of the authors of Cook 2013, explained how this tactic is used against Cook 2013.
Tol’s nonsensus
Which brings me to the latest attack by one of the most persistent attackers of Cook 2013: Richard Tol.
He has a history of attacking Cook 2013 with strange claims about flawed methodologies and saying that the data doesn’t show a 97% consensus. That is rather odd, as the 97% figure was also found when Cook et al. asked the authors what the position of their paper was.
What makes Tol’s persistence in attacking Cook 2013 even stranger is that he has said that “There is no doubt in my mind that the literature on climate change overwhelmingly supports the hypothesis that climate change is caused by humans. I have very little reason to doubt that the consensus is indeed correct” and that “The consensus is of course in the high nineties.”
But then he publishes this chart:
He published this graph in his blog post More nonsenus [sic] (archived here), in which he announced a comment on Cook 2013 that he has written (currently under review at the journal Environmental Research Letters). The graph strongly implies that Cook 2013, with the consensus it found, is the outlier. It isn’t.
I’ve already mentioned to Tol that context is everything when talking about consensus percentages. It’s this context that makes clear that what Tol has produced is a perfect example of nonsensus (not the other way around). He uses the tactics I mentioned earlier, and many more that I haven’t mentioned yet, to make the consensus look lower than it is.
So how did Tol manage to make Cook 2013 appear to be an outlier? When I asked Nuccitelli this question he gave me the following answer:
What Tol has essentially done is to define “consensus” in a number of different ways, most of them making no sense whatsoever, and put the incomparable results together on a single chart in a grossly misleading manner. For the literature surveys, in some cases he’s included papers that don’t take a position on what’s causing global warming (i.e. in Oreskes’ 75%), and in others he’s omitted them (i.e. in Cook’s 97%). For the scientist surveys, in some cases he’s included both expert and non-expert opinion, sometimes just experts, and in some cases he’s even included numbers that only represent the ‘consensus’ among those who reject human-caused global warming, most of whom have no expertise in climate science. These are not comparable numbers, and putting them together on one chart makes no sense.
Yes, it’s indeed this bad. It’s something I immediately noticed when I looked at the graph and the studies he cited. But you don’t have to take Nuccitelli’s or my word on this. I contacted the authors of the cited consensus studies and I got back some scathing responses.
Misrepresenting research
When I asked Naomi Oreskes if Tol had used her data and findings correctly, she wasn’t happy with Tol changing a consensus of 100% into one of 75% (bolding and link mine):
No it is not accurate. As usual, he is misrepresenting scientific work, in this case mine.
Obviously he is taking the 75% number below and misusing it. The point, which the original article made clear, is that we found no scientific dissent in the published literature. This demonstrates that the “dissent” that was being reported in the media was politically-driven, not scientifically driven, which was, of course, the point of the paper, and led to our book, Merchants of Doubt, which explains where the political dissent comes from.
Oreskes is referring to a passage in her article where she said that, of the papers she investigated, 75% either explicitly or implicitly accept the consensus view (defined in her paper as: most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gases). The remaining 25% of the papers didn’t say anything about human-caused global warming, but none disagreed with the consensus position.
This is how Tol changed a consensus of 100% into 75%: by counting papers that did not say anything about the question Oreskes was trying to answer. Bart Verheggen, another cited author, publicly pointed this out to Tol: “You can’t just divide the number of affirmative statements by all papers in the sample, if many papers didn’t actually stake out any position on the question at hand. The latter should logically be excluded, unless you want to argue that of all biology papers, only 0.5% take an affirmative position on evolution, hence there is low consensus on evolution.”
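To make the arithmetic behind Verheggen’s point concrete, here’s a minimal sketch of how the choice of denominator changes the headline number. The percentages are the ones discussed in this article and in the comments below (Oreskes: 75% endorse, 25% no position, 0% reject; Cook 2013: roughly 33% endorse, 66% no position, 1% reject); the function name is mine, purely for illustration:

```python
def consensus(endorse, reject, no_position, include_no_position):
    """Consensus percentage, with or without 'no position'
    papers counted in the denominator."""
    denominator = endorse + reject
    if include_no_position:
        denominator += no_position
    return 100 * endorse / denominator

# Oreskes 2004: 75% endorse, 25% no position, 0% reject
print(consensus(75, 0, 25, include_no_position=False))  # 100.0 -> her actual finding
print(consensus(75, 0, 25, include_no_position=True))   # 75.0  -> the number Tol plotted

# Cook 2013 (abstract ratings): ~33% endorse, ~66% no position, ~1% reject
print(consensus(33, 1, 66, include_no_position=False))  # ~97   -> the reported consensus
print(consensus(33, 1, 66, include_no_position=True))   # 33.0  -> the inconsistent alternative

# Verheggen's evolution analogy: if only 0.5% of biology papers bother to
# affirm evolution and none reject it, counting 'no position' papers in the
# denominator would yield a 0.5% 'consensus' on evolution.
print(consensus(0.5, 0, 99.5, include_no_position=True))   # 0.5
print(consensus(0.5, 0, 99.5, include_no_position=False))  # 100.0
```

Treat both studies consistently and you get 100% and 97%; include the “no position” papers for one study but not the other and you manufacture the spread that Tol’s chart shows.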
The effect of an evidence-based consensus is that what is already established will seldom be explicitly mentioned in a paper, an effect Cook 2013 very clearly found with their large sample. Verheggen gave more detail when I corresponded with him about Tol’s usage of his research (bolding mine):
Tol selectively quotes results from our survey. We provided results for different subsamples, based on different questions, and based on different types of calculating the level of agreement, in the Supporting Information with our article in ES&T. Because we cast a very wide net with our survey, we argued in our paper that subgroups based on a proxy for expertise (the number of climate related peer reviewed publications) provide the best estimate of the level of scientific consensus. Tol on the other hand presents all subsamples as representative of the scientific consensus, including those respondents who were tagged as “unconvinced”. This group consists to a large extent of signatories of public statements disapproving of mainstream climate science, many of whom are not publishing scientists. For example, some Heartland Institute staffers were also included. It is actually surprising that the level of consensus in this group is larger than 0%. To claim, as Richard Tol does, that the outcome for this subsample is somehow representative of the scientific consensus is entirely nonsensical.
Another issue is that Richard Tol bases the numbers he uses on just one of the two survey questions about the causes of recent climate change, i.e. a form of cherry picking. Moreover, we quantified the consensus as a fraction of those who actually answered the question by providing an estimate of the human greenhouse gas contribution. Tol on the other hand quantifies the consensus as a fraction of all those who were asked the question, including those who didn’t provide such an estimate. We provided a detailed argument for our interpretation in both the ES&T paper and in a recent blogpost.
Cherry picking is the tactic of focussing on specific pieces of data, often out of context, while excluding any data that conflicts with the desired conclusion. Verheggen’s research was misrepresented in a similar way by Rick Santorum during an interview.
Verheggen also raised very clearly what I had already mentioned to Tol: expertise matters. It’s these detailed questions that made it possible for Verheggen to find the interesting result that attribution experts, the scientists who investigate what is changing temperatures and by how much, say that humanity has caused more than 100% of the warming (natural trends and factors would make temperatures drop slightly if we weren’t increasing greenhouse gases).
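How can a contribution exceed 100%? Here’s a minimal sketch with made-up numbers (these are purely illustrative, not Verheggen’s figures):

```python
# Hypothetical numbers, purely to illustrate how attribution can exceed 100%.
observed_warming = 0.8        # deg C: total observed warming (made up)
natural_contribution = -0.05  # deg C: slight natural cooling (made up)

# The human contribution is what's left after subtracting natural factors.
human_contribution = observed_warming - natural_contribution   # 0.85 deg C

human_share = 100 * human_contribution / observed_warming
print(f"Human share of observed warming: {human_share:.0f}%")  # ~106%
```

If natural factors on their own would have cooled the planet slightly, the human contribution has to be larger than the warming we actually observe, hence a share above 100%.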
The problem with Tol ignoring expertise for his consensus percentages was spelled out by William Anderegg when I asked him if Tol had used his results correctly (bolding mine):
This is by no means a correct or valid interpretation of our results. For our sampling strategy, we bent over backwards to include as many doubters as possible within our sample, so analyzing the whole sample is completely misleading and misrepresenting our study. We showed that 50% of the doubter group had *zero* publications in the peer-reviewed climate literature whatsoever, and 80% had fewer than 20 publications, which was our cut-off to be included as an expert. The basic premise of analyzing expert consensus is that you should only count the views of true experts in the subject. You wouldn’t count the opinions of astronomers on the best heart surgery technique. Thus it makes no sense at all to count the vast numbers of non-expert doubters included in our sample. We showed in a follow-up example that a large fraction didn’t have a Ph.D. at all and those that did were primarily in fields almost entirely unrelated to climate science.
Neil Stenhouse, another cited author, had the same issue with how Tol calculates his consensus percentages, and he highlights why not all studies can be directly compared (bolding mine):
Tol’s description omits information in a way that seems designed to suggest – inaccurately – that the consensus among relevant experts is low. This is contrary to the conclusion we took from our data, which is that there is a high level of consensus among actively publishing climate experts. Tol’s reference to “subgroups” generally, as if the nature of the subgroups were irrelevant, omits the fact that the subgroup with the highest level of expertise is also the subgroup with the highest level of agreement that global warming is human-caused.
Tol also omits something else we mentioned – that our estimates of consensus may be conservative, given that (due to an oversight) we asked about the past 150 years of global warming, rather than the past 50 or so years (the period commonly studied for signs of human causation). Several respondents emailed to suggest their answer would have been different if we had asked about the last 50 years.
Because he omits this kind of information that is centrally relevant to interpreting the numbers correctly, despite clear discussion of it in the article text, I have to wonder about his commitment to clarifying these matters for readers – his claimed motive for writing the comment.
Peter Doran really hammered home the point that Tol is incorrectly comparing datasets and results. He also very clearly spelled out why expertise matters (bolding and link mine):
Well, I would never express it that way. I’ve attached the EOS paper which is very short and readable.
We sent a survey to 10,257 Earth Scientists listed in the AGI directory. [Of these scientists] 3146 people responded, 90% of this group answered that they agreed temperatures were increasing, 82% expressed the view that humans have played a significant role (exact questions in the attachment). […]
Our results showed that when you focus on the most knowledgeable group with regards to climate – those who self-identify as climate scientists and are active in climate research and publication […] – this subset has the strongest response to Q2 about the human influence. They are >97% in support.
But all 97% are not equal. The 97% in the Anderegg study and the 97% in the Cook study all address slightly different things and groups. To properly state our 97% it would be “>97% climate scientists, who are actively publishing in the field, think human activity is a significant contributing factor in changing mean global temperatures since the industrial revolution.”
All the other numbers Tol throws out are subsets from less qualified people in our survey. It’s also not all of them. The EOS paper was based on my student’s thesis (which is referenced in the EOS paper) where the full survey and details are kept. It was much too big for an EOS article. So Tol lists the opinion of our lowest groups only with equal weight to our highest group and overall result. In fact, in the survey there were 25 categories of expertise. So if he wants to take the approach he’s taken, he should be pulling 27 numbers from our survey (these 25 categories, plus the overall number, plus the 97% for publishing climatologists). But wait, why stop there, to be complete you would have to include another 25 numbers for all of these groups for people who are not active publishers, and another 25 for people who are active publishers, and another 25 for those who have PhD’s, another 25 for those who have MS degrees, etc. That’s not how you do statistics. Our study focused on expertise. The conclusion was that the more expertise you have in climate science and/or being an active researcher, the stronger your support for humans playing a significant role. To pull out a few of the less expert groups and give them the same weight as our most expert group is a completely irresponsible use of our data. It would be like me having a medical team tell me I need surgery to remove a life-threatening malignant growth, but going to my local Starbucks to get the opinion of the team of baristas and giving both recommendations the same weight.
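Doran’s statistical point can be sketched in a few lines. The 82% overall figure and the >97% for actively publishing climatologists are from his survey as quoted above; the two middle subgroups below are hypothetical, purely to illustrate why presenting every subsample as ‘the consensus’ misleads:

```python
# Agreement that humans play a significant role, per subgroup.
# The first and last figures are from the Doran survey as quoted above;
# the middle two are hypothetical illustrations, not his actual results.
subgroups = [
    ("all respondents (overall result)",                 82),
    ("hypothetical low-expertise subgroup",              70),
    ("hypothetical mid-expertise subgroup",              88),
    ("actively publishing climatologists (most expert)", 97),
]

# Presenting every subgroup number as 'the consensus' manufactures
# a wide spread of seemingly equally valid estimates.
for name, pct in subgroups:
    print(f"{pct:3d}%  {name}")

# The study's actual conclusion: agreement rises with expertise, so the
# most expert subgroup gives the best estimate of the scientific consensus.
name, pct = subgroups[-1]
print(f"Best estimate: >{pct}% ({name})")
```

Pulling the low rows out of a table like this and giving them the same weight as the bottom row is exactly the mistake Doran’s Starbucks analogy describes.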
The two citations of the work of Hans von Storch and Dennis Bray are the least problematic ones (one of the percentages is correct). Though here, too, Tol didn’t account for non-responses or adjust the data in any way so that the different consensus percentages could be compared.
There are more issues in the comment Tol has written. The cited authors responded to just one paragraph in Tol’s comment; there are four more.
Same refuted claims
These paragraphs contain old accusations that were already refuted by the authors of Cook 2013 or by fellow skeptics. Some of them by me.
I started writing about the strange claims made by Tol back in 2013. The last time I wrote about him was when he got a severely flawed paper published criticising Cook 2013. The authors highlighted 24 errors, one of them being Tol misrepresenting stolen private correspondence. More details can be found in my article Richard Tol’s 97% Scientific Consensus Gremlins.
Several major issues I mentioned in that article I had already highlighted in Richard Tol Versus Richard Tol On The 97% Scientific Consensus. You can also read more about one of the dubious papers Tol cited in 97% Climate consensus ‘denial’: the debunkers again not debunked.
You can read more about Tol misrepresenting research by the authors of Cook 2013 in their articles 97% global warming consensus meets resistance from scientific denialism, Climate contrarians accidentally confirm the 97% global warming consensus and, of course, in their 24 errors document (full disclosure: I was one of the reviewers of the 24 errors document and I contributed a couple of small sections to it).
Fellow skeptics also engaged with Tol in comment sections to point out a myriad of issues and mistakes in his claims. You can read those comments below the articles Deconstructing the 97% self-destructed Richard Tol, The fall and fall of Gish galloping Richard Tol’s smear campaign, The Evolution of a 97% Conspiracy Theory – The Case of the Abstract IDs and More nonsense – sorry, nonsensus – from Richard Tol.
There’s far more, but this is enough to establish a pattern of refusing to correct mistakes and incorrect claims.
Conclusion
I think Tol made a big mistake here by trying to paint Cook 2013 as an outlier, as shown by the responses I got from the authors of the cited consensus studies. Two days before Tol published his Cook 2013 comment I had already warned him that Verheggen, Doran, and Anderegg would not agree with the conclusions he was drawing from their papers and data (he was publishing previews of the graph on Twitter). A warning that he dismissed.
I don’t know why Tol has such a dislike for Cook 2013, but it has driven him to reject evidence that shows his claims have no merit whatsoever. This type of behaviour is nothing new. When Andrew Gelman, a statistical heavyweight, analysed and critiqued a paper from Tol, Tol got very defensive. To the point that Gelman saw the need to say the following:
There’s no shame in being confused—statistics is hard. But if your goal is to do science, you really have to move beyond this sort of defensiveness and reluctance to learn […] I’m sure you can go the rest of your career in this manner, but please take a moment to reflect. You’re far from retirement. Do you really want to spend two more decades doing substandard work, just because you can? You have an impressive knack for working on important problems and getting things published. Lots of researchers go through their entire careers without these valuable attributes. It’s not too late to get a bit more serious with the work itself.
Gelman wrote that in May of 2014. I truly hope that Tol’s current attempt at critiquing Cook 2013 is not a positive answer to the question about two more decades of substandard work.
Featured comments
Great article, Collin. Terrific to see the responses you got from everyone. And thanks for the links.
What Sou said.
Great for those who don’t wanna get bogged down in the details too. Multiple researchers saying a paper is misrepresenting their work = big bright red flag.
It is quite a spectacle, with people taking issue with their own research.
The Oreskes-Cook comparison is most clear-cut, because they used the same approach and even partly the same categorisation.
Oreskes finds 75% in agreement, 25% no position, and 0% disagreement.
Cook finds 33% in agreement, 66% no position, and 1% disagreement.
How could anyone claim that Oreskes and Cook find the same thing?
The authors aren’t taking issue with their own research, they’re taking issue with what you did to it. And again, you don’t include papers/responses that don’t answer the question you’re trying to answer. That’s not how you do statistics and you know that (it’s impossible not to know this with your education level).
Or take Verheggen’s study. In the sample selection phase, they decided that these are the people they want to ask. After having seen the answers, they decided that some of their interviewees are not worth listening to.
Richard, that is one hell of an accusation you’re making, when this was explained to you by Verheggen in the response I quoted in the above article. Verheggen also explains this further on his own website with an analogy.
With your comment you’re implying that Verheggen massaged the data in such a way as to get the answer he wants. This is not what they did, as what experts say matters when you’re trying to measure a consensus. I doubt you would appreciate it if this was done for a question regarding expert opinion on the economy.
But if Verheggen did not accept the expertise of these interviewees, why did he interview them in the first place?
It’s not about Verheggen not accepting expertise, as you put it. It’s explained in his paper, which you should have read and which I already pointed out to you.
In other words, the subgroups are used to see how expertise influences the consensus percentage (or what the expertise is of those who reject the consensus). This should be obvious from Verheggen’s paper, the response from Verheggen that I quoted in the above article, his responses to you on Twitter, and his own blog post.
This subgroup again shows how important expertise is for accepting the consensus position (i.e. knowledge). It’s this data that shows how irrelevant this group is when it comes to representing climate science accurately.
It is of interest to note that Tol here compares the 75% of Oreskes with 33% found by Cook et al. However, he does *not* include this latter number in his figure, but only includes the numbers where the “no position” papers are excluded, and thus yield a 97% consensus. He did not exclude the “no position” papers for Oreskes, which would yield a 100% consensus. Thus, he has treated two papers in a different way, and I am having a hard time considering this an accidental mistake.
The same applies to the results in Verheggen et al. who report a consensus of 97 [93 – 99]% for AR4 WGI authors who expressed a position with respect to AGW, but which Tol reports as 79%. I am also having a hard time considering this an accidental mistake. In fact “hard time” is an under-statement.
97 vs 79, return of the gremlins?
Better than -97, I guess.
Colin,
“This subgroup again shows how important expertise is for accepting the consensus position (i.e. knowledge)”
That’s a curious equation:
the consensus position = knowledge.
Not just curious. It also contradicts almost 3000 years of Western epistemology, according to which:
knowledge = justified true belief.
After all, you’d surely agree that a number of consensus positions are, or have been discovered to be, UNTRUE.
For example, the consensus of just about every Western philosopher since Aristotle has been that:
knowledge = justified true belief.
Yet you think this is NOT true. Rather, you think that knowledge = the consensus position.
So it’s ironic that you reject the position of just about every Western philosopher since Aristotle.
I wonder where you can possibly have gotten the peculiar notion that
the consensus position = knowledge.
[snip]
Surely not. That wouldn’t be very skeptical of you!
Not what I’m saying at all. From the start I’ve been saying that a consensus arises from a preponderance of evidence all telling the same thing. Expertise means that you have certain knowledge of a subject matter. So you’re aware of this preponderance of evidence, or maybe the lack thereof, and this will make you aware of this knowledge-based consensus.
Also, you might want to watch the following video; it deals with your claim about untrue consensus positions:
Thanks Collin. I’m glad this sentence:
“This subgroup again shows how important expertise is for accepting the consensus position (i.e. knowledge)”
did NOT mean to equate “the consensus position” with “knowledge.”
I’m glad you agree that they are NOT the same thing. I wonder why Naomi Oreskes writes in Merchants of Doubt (2010) that [my emphasis],
“So we can think of scientific knowledge as a consensus of experts. We can also think of science as being a kind of a jury, except it’s a very special kind of jury. It’s not a jury of your peers, it’s a jury of geeks. It’s a jury of men and women with Ph.D.s, and unlike a conventional jury, which has only two choices, guilty or not guilty, the scientific jury actually has a number of choices.”
And one still wonders why you wrote “i.e. knowledge” after the phrase “the consensus position.”
“i.e.” means “id est,” and “id est” means “that is.”
So what were you trying to get across, if not that “the consensus position is knowledge”?
You hypothesise that
“a consensus arises from a preponderance of evidence all telling the same thing.”
It may or it may not. It could arise from all sorts of things—nobody knows, since (until climate science came along) it was extremely rare to conduct position surveys on scientists. So there simply isn’t enough data to justify such a sweeping new law of nature.
Take the blinkered refusal by the chemistry profession to even consider the existence of quasicrystals—Dan Shechtman practically had to force his colleagues to look down the lens of his microscope.
Yet if the opinion that quasicrystals were nonexistent was shared by a large majority of chemists, as it anecdotally appears to have been, did this consensus “arise from a preponderance of evidence all telling the same thing”?
No.
It apparently arose from a lack of curiosity.
Brad
PS: I began watching the video, until (at about the 20-second mark) it had become unequivocally obvious that the presenter was NOT trained in epistemology, and certainly not in scientific epistemology.
Like I said, a consensus arises from a preponderance of evidence. That’s all I’m saying, not that knowledge is consensus (and neither is Oreskes). If you had watched the video I linked to for context, this should have been obvious. The same video also tackles how reaching a consensus isn’t a perfect process and can take some time (though your example of quasicrystals lacks some contextual details). But in the end it is the evidence that wins, and positions are adjusted accordingly. So please watch the video to understand what I’m talking about.
My apologies. I’ve watched the video now. How can I put this nicely? I recommend you get your scientific epistemology from a scientific epistemologist, not a Graduate Student in Env Sci & Policy.
After all, expertise matters…. right?
“That’s all I’m saying, not that knowledge is consensus (neither is Oreskes).”
She’s saying that scientific knowledge is a consensus of experts. I won’t spam the thread by repeating the quote; you can go back and read it. I’m glad you avoid making the mistake she makes.
Dismissing someone without actually pointing out what’s wrong is not a valid counterargument. Expertise is a proxy for trustworthiness and the likelihood of someone being correct. It does not give you carte blanche to dismiss someone, as it doesn’t tell you whether the information is correct or not.
Also, let’s address the Oreskes quote you’re using. It’s not from the 2010 book Merchants of Doubt (it’s not in there, I checked). It’s from a TED Talk. Here’s the full context of what you’re quoting:
These paragraphs from the transcript precede what you’ve been quoting. So again, Oreskes isn’t saying that consensus is knowledge, but that consensus arises from the available evidence. She’s saying the consensus is a reflection and consequence of this knowledge.