By now, most of us are aware that 97 percent or more of actively publishing climate scientists agree that global warming trends over the past century are most likely due to human activities. But what about the remaining 3 percent who reject this conclusion based on their own scientific investigations? How did they come up with such different results, and do their analyses render the climate consensus incorrect?

To answer these questions, an international team of scientists has attempted to replicate the findings of a selection of climate contrarian papers. Publishing in the journal Theoretical and Applied Climatology, they report that these papers are riddled with false dichotomies, inappropriate statistical methods, and misconceived or incomplete physics, and that most display much the same methodological flaws, with cherry picking - selecting and omitting evidence to suit a bias - the most widespread.

"We found that many contrarian research papers omitted important contextual information or ignored key data that did not fit the research conclusions," one of the team, Dana Nuccitelli from Skeptical Science in Australia, writes at The Guardian. 

For example, when analysing a 2011 paper by Humlum et al., they found that in order to support the "vague idea" that the lunar and solar cycles can somehow affect Earth's climate, the authors discarded 6,000 years' worth of data because their model couldn't reproduce the temperature changes during that time. "The authors argued that their model could be used to forecast future climate changes, but there's no reason to trust a model forecast if it can't accurately reproduce the past," says Nuccitelli.
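
To make that principle concrete, here's a minimal Python sketch of a hindcast check - entirely our illustration, using synthetic numbers and a toy trend model, not anything from Humlum et al.'s paper. The idea is simply to fit a model on one stretch of the record and ask whether it can reproduce an earlier stretch it never saw:

```python
import numpy as np
from numpy.polynomial import Polynomial

# A minimal sketch of hindcast validation. Assumptions: synthetic data and
# a simple linear-trend model; this is not Humlum et al.'s method or data.
rng = np.random.default_rng(0)
years = np.arange(1900.0, 2021.0)
# Synthetic temperature anomalies: a slow warming trend plus noise.
temps = 0.008 * (years - 1900) + 0.1 * rng.standard_normal(years.size)

# Fit the model on the recent period only...
recent = years >= 1950
model = Polynomial.fit(years[recent], temps[recent], deg=1)

# ...then check whether it reproduces the held-out earlier period (the
# hindcast) before trusting any forecast it makes.
held_out = ~recent
rmse = np.sqrt(np.mean((model(years[held_out]) - temps[held_out]) ** 2))
print(f"Hindcast RMSE on 1900-1949: {rmse:.3f} °C")
# A model that can't reproduce the past it was never shown gives no
# grounds for trusting its projections of the future.
```

A model earns trust by passing exactly this kind of test - which, according to the replication team, Humlum et al. sidestepped by discarding the inconvenient 6,000 years.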

Curve fitting - constructing a curve that best fits a given series of data points (in this case, temperature data) - was also present throughout the 3 percent, the researchers report. "Good modelling will constrain the possible values of the parameters being used so that they reflect known physics, but bad 'curve fitting' doesn't limit itself to physical realities," he writes at The Guardian. "For example, we discuss research by Nicola Scafetta and Craig Loehle, who often publish papers trying to blame global warming on the orbital cycles of Jupiter and Saturn."
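
Here's an equally stripped-down sketch of that trap, again on synthetic numbers, using an arbitrary physics-free curve (a 12th-degree polynomial) as a stand-in - to be clear, not Scafetta's or Loehle's actual model. The curve bends to follow the past data, then falls apart the moment it's asked to extrapolate:

```python
import numpy as np
from numpy.polynomial import Polynomial

# A minimal sketch of unconstrained curve fitting. Assumptions: synthetic
# data, and a flexible polynomial standing in for any physics-free model.
rng = np.random.default_rng(1)
years = np.arange(1900.0, 2031.0)
# Synthetic anomalies: a steady warming trend plus noise. Not real data.
temps = 0.008 * (years - 1900) + 0.1 * rng.standard_normal(years.size)

# Fit a very flexible curve to 1900-2010. With 13 free parameters and no
# physical constraints, it can bend to track the past data closely.
train = years <= 2010
fit = Polynomial.fit(years[train], temps[train], deg=12)

rmse_in = np.sqrt(np.mean((fit(years[train]) - temps[train]) ** 2))
rmse_out = np.sqrt(np.mean((fit(years[~train]) - temps[~train]) ** 2))
print(f"In-sample RMSE (1900-2010):     {rmse_in:.3f} °C")
print(f"Out-of-sample RMSE (2011-2030): {rmse_out:.3f} °C")
# The in-sample fit looks good, but the extrapolation typically diverges
# badly: nothing about the curve reflects a physical mechanism, so there
# is no reason for it to keep matching reality beyond the fitted range.
```

Constraining the parameters to known physics is what separates modelling from curve fitting; a free-floating curve can match any past wiggle without telling you anything about why the climate changes.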

Which brings the researchers to their third big criticism of these papers - they ignore not only the scientific consensus on human-caused climate change, but basic physics as well, with Nuccitelli citing a "clear lack of plausible physics" as a common theme.

The researchers say this with two very clear caveats: one is that they only examined 38 contrarian papers, so they can't say that these errors run through all climate contrarian papers, and the other is that they didn't include a control group of papers. Nuccitelli says they have no doubt that if their replication approach were applied to consensus papers, methodological errors would also be uncovered. So yes, that's arguably an error of their own, because they didn't include a matching 38 consensus papers in their analysis. "However, these types of flaws were the norm, not the exception, among the contrarian papers that we examined," writes Nuccitelli.

Lead author Rasmus Benestad from the Norwegian Meteorological Institute makes clear at realclimate.org that the papers they analysed weren't a random selection of climate contrarian papers, which means what they've completed is not a statistical study, but rather an analysis of how well these papers hold up under replication. Benestad writes:

"We had been up-front about our work not being a statistical study because it did not involve a random sample of papers. If we were to present it as a statistical study, then itself would be severely flawed as it would violate the requirement of random sampling. Instead, we specifically chose a targeted selection to find out why they got different answers, and the easiest way to do so was to select the most visible contrarian papers."

These different answers are the biggest red flag when it comes to contrarian papers, says Nuccitelli. The consensus papers have all come to the same conclusion, while the contrarian research papers are "all over the map, even contradicting each other".

While it's great that analysis like this is being done, we'd love to see it done again with a proper control group of consensus papers and a larger, random sample of contrarian papers, so we can see how widespread these errors are across the board. But Benestad's team has achieved what it set out to do, and now it's up to the authors of the targeted papers to explain themselves.