‘We conclude that these data demonstrate that there is a low risk to honey bees from systemic residues in nectar and pollen following the use of thiamethoxam as a seed treatment on oilseed rape and maize’ – the Syngenta study, October 2013
Most people interested in bees and neonicotinoid pesticides probably won’t read the scientific papers on the subject, as the papers are technically dense. Many of those who do, including most journalists, will, I suspect, read the Abstract and perhaps skim some of the rest of the paper. The quote above, from the Syngenta paper, is strategically positioned as the last sentence of the Abstract and is commendably comprehensible – many will grasp hold of it and believe that the study demonstrated (not suggested, but demonstrated) that thiamethoxam doesn’t do bees any, or much, harm. Phew! That’s good. The fact that this paper was published in a journal of repute would add to many people’s confidence in the results. After all, that’s what the refereeing process is for, and top journals should attend carefully to getting controversial and important studies well scrutinised.
Then along come some doubters, who have read the paper more carefully than most of us and start to wonder at the robustness of the findings. In 2014 Prof Jeremy Greenwood published a Guest Blog here which concluded ‘With replications of only two or three, no formal statistical analysis and the results being published in a form so aggregated that it is impossible to assess the variation between replicates, the conclusion that “mortality, foraging behavior, colony strength, colony weight, brood development and food storage levels were similar between treatment and control colonies” is a clear overstatement unless one defines similar so loosely as to be scientifically meaningless. It is difficult to understand how the work came to be published in a refereed journal.’
In other words, the sample sizes were tiny, so you would be unlikely to learn much from these data unless both the control results and the treatment results were very tightly clustered.
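To see why tiny samples are so uninformative, here is a hypothetical illustration – the numbers are invented for the purpose and are not the Syngenta data – showing that with only three replicate colonies per group and realistic natural variability, the 95% confidence interval for a treatment effect is enormous compared with the estimated effect itself:

```python
from math import sqrt
from statistics import mean, variance

# Invented colony weights (kg) for illustration only - NOT real data.
control = [30.0, 34.0, 38.0]
treated = [28.0, 33.0, 37.0]

n_c, n_t = len(control), len(treated)
diff = mean(control) - mean(treated)  # estimated treatment effect

# Pooled two-sample t interval; 2.776 is the 97.5th percentile of
# Student's t with n_c + n_t - 2 = 4 degrees of freedom.
pooled_var = ((n_c - 1) * variance(control)
              + (n_t - 1) * variance(treated)) / (n_c + n_t - 2)
half_width = 2.776 * sqrt(pooled_var * (1 / n_c + 1 / n_t))

print(f"estimated difference: {diff:.1f} kg")
print(f"95% CI: ({diff - half_width:.1f}, {diff + half_width:.1f}) kg")
# The interval runs from roughly -8 kg to +11 kg: the data are
# consistent with no effect and with a large effect in either
# direction, so the study cannot distinguish between them.
```

With three replicates per group the interval is about seven times wider than the estimated difference, which is exactly the sense in which such a study ‘fails to find’ an effect without providing any evidence that the effect is absent.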
Prof Greenwood’s Guest Blog also mentioned that Syngenta were refusing to share the data with him so that he could check their analysis and its robustness – despite data-sharing being a condition of publication in the journal.
This blog (and my Twitter account) nagged Syngenta and the journal to ensure that the data were released and eventually, perhaps reluctantly, Syngenta agreed to do so (see here) although I believe that agreement to supply the data preceded the data themselves by quite some time.
The reanalysis of the dataset is now complete: a paper by Schick, Greenwood and Buckland, submitted for publication in September 2016, has now been published. Prof Greenwood blogs about that paper here today. Here are some conclusions from the paper itself:
‘Conclusions derived from the inspection of the data were not just misleading in this case but also are unacceptable in principle, for if data are inadequate for a formal analysis (or only good enough to provide estimates with wide confidence intervals), then they are bound to be inadequate as a basis for reaching any sound conclusions. Given that the data in this case are largely uninformative with respect to the treatment effect, any conclusions reached from such informal approaches can do little more than reflect the prior beliefs of those involved.’ – the St Andrews University reanalysis of the Syngenta study, January 2017.
And a quote from Prof Greenwood’s Guest Blog of today: ‘What we found was that the confidence limits of the estimates of the effects of thiamethoxam were mostly too wide to reveal anything useful: the results were consistent both with the hypothesis that the effects of thiamethoxam on bees in normal agricultural use were trivial and with the hypothesis that they were large enough to be ecologically or economically important.’
Let’s be clear about this: the reanalysis does not ‘prove’ that neonics are harmless, nor does it ‘prove’ they are dangerous to bees – it says that this study adds essentially nothing to our knowledge of the subject (despite being heralded as important and despite being published in a highly reputable journal).
The fact is that this study was highly unlikely to be able to show anything of value, because the sample sizes were small and nature is variable. Syngenta did a duff study that from the outset was unlikely to tell us much. If I were a shareholder I’d wonder where else Syngenta is spending its money badly. That much is pretty certain; anything more is more speculative.
And it is dangerous to speculate on people’s motives, because these are rarely completely clear to us. But it is noticeable that if Syngenta realised their study was unlikely to show any differences (because it was a pretty duff study), that would not be to the disadvantage of a large international company selling pesticides and enmeshed in controversy about their environmental impacts. It would be much better for Syngenta’s commercial interests if there were studies that clearly demonstrated a lack of harmful impact on bees but, in the absence of those, it would be quite handy to have studies that failed to show any impact, even if that failure were due to low statistical power.
The authors of the Syngenta paper were clearly aware that their study was a weak one (see the last paragraph of the Methods – arguably the least-read part of any paper?) but that admission did not find its way into the Abstract, which also fails to mention the sample sizes of the study. Instead, the Abstract ends with the very strong, and unsubstantiated, statement quoted at the top of this post. Because this is science, there are enough data in the paper, for those who look, to reveal that this is a conclusion built on sand. And because Prof Greenwood persisted in getting the data and analysing them properly, we now know that the sand was of Saharan scale.
The 36 words at the end of the Abstract are not justified by the data in the paper, and the authors must have known that (and if they didn’t, they should have) – and so should the journal. Both let down the public and the scientific community by allowing such an obviously over-egged conclusion to be published on such a contentious and important issue.