Conclusions built on sand

[Image: Apis mellifera, by maxson.erin, via Wikimedia Commons]

‘We conclude that these data demonstrate that there is a low risk to honey bees from systemic residues in nectar and pollen following the use of thiamethoxam as a seed treatment on oilseed rape and maize’ – the Syngenta study, October 2013

Most people interested in bees and neonicotinoid pesticides probably won’t read the scientific papers on the subject as the papers are technically dense. Many of those who do, including most journalists, will, I guess, most likely read the Abstract and perhaps skip through some of the rest of the paper. The quote above, from the Syngenta paper, is strategically positioned as the last sentence in the Abstract and is commendably comprehensible – many will grasp hold of it and believe that the study demonstrated (not suggested, but demonstrated) that thiamethoxam doesn’t do bees any, or much, harm. Phew! That’s good. The fact that this paper was published in a journal of repute would add to many people’s confidence in the results. After all, that’s what the refereeing process is for, and top journals should attend carefully to getting controversial and important studies well scrutinised.

Then along come some doubters, who have read the paper more carefully than most of us and start to wonder at the robustness of the findings. In 2014 Prof Jeremy Greenwood published a Guest Blog here which concluded ‘With replications of only two or three, no formal statistical analysis and the results being published in a form so aggregated that it is impossible to assess the variation between replicates, the conclusion that “mortality, foraging behavior, colony strength, colony weight, brood development and food storage levels were similar between treatment and control colonies” is a clear overstatement unless one defines similar so loosely as to be scientifically meaningless. It is difficult to understand how the work came to be published in a refereed journal.’.

In other words, the sample sizes were tiny, so it is unlikely that you’d be able to tell much from these data unless the control results were clustered close together and so were the treatment results.
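For readers who like to see the arithmetic, here is a rough sketch in Python (with invented numbers – nothing below comes from the Syngenta dataset) of why two or three replicates per group leave you almost nowhere:

```python
import numpy as np
from scipy import stats

# Hypothetical illustration only: three replicate colonies per group,
# drawn from identical distributions, i.e. no true treatment effect.
rng = np.random.default_rng(1)
control = rng.normal(loc=100, scale=15, size=3)  # invented colony-strength scores
treated = rng.normal(loc=100, scale=15, size=3)

# 95% confidence interval for the difference in means
# (with equal group sizes this matches the pooled t interval, df = 4)
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 3 + control.var(ddof=1) / 3)
t_crit = stats.t.ppf(0.975, df=4)
print(f"difference: {diff:.1f}, "
      f"95% CI: ({diff - t_crit * se:.1f}, {diff + t_crit * se:.1f})")
# With n = 3 per group the interval is typically so wide that it is
# consistent both with 'no effect' and with an effect big enough to matter.
```

Run it with a few different seeds and the interval routinely spans a range far wider than any difference one could honestly call ‘similar’.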

Prof Greenwood’s Guest Blog also mentioned that Syngenta were refusing to share the data with him so that he could check their analysis and its robustness (despite data sharing being a condition of publication in the journal).

This blog (and my Twitter account) nagged Syngenta and the journal to ensure that the data were released, and eventually, perhaps reluctantly, Syngenta agreed to do so (see here), although I believe that the agreement to supply the data preceded the data themselves by quite some time.

The reanalysis of the dataset is now complete: a paper by Schick, Greenwood and Buckland was submitted for publication in September 2016 and is now published. Prof Greenwood blogs about that paper here today. Here are some conclusions from the paper itself:

‘Conclusions derived from the inspection of the data were not just misleading in this case but also are unacceptable in principle, for if data are inadequate for a formal analysis (or only good enough to provide estimates with wide confidence intervals), then they are bound to be inadequate as a basis for reaching any sound conclusions. Given that the data in this case are largely uninformative with respect to the treatment effect, any conclusions reached from such informal approaches can do little more than reflect the prior beliefs of those involved.’ – the St Andrews University reanalysis of the Syngenta study, January 2017.

And a quote from Prof Greenwood’s Guest Blog of today: ‘What we found was that the confidence limits of the estimates of the effects of thiamethoxam were mostly too wide to reveal anything useful: the results were consistent both with the hypothesis that the effects of thiamethoxam on bees in normal agricultural use were trivial and with the hypothesis that they were large enough to be ecologically or economically important.’

Let’s be clear about this: the reanalysis does not ‘prove’ that neonics are harmless, nor does it ‘prove’ that they are dangerous to bees – it says that this study really adds nothing to our knowledge of the subject (despite being heralded as important and despite being published in a highly reputable journal).

The fact is that this study was highly unlikely to be able to show anything of value, because the sample sizes were small and nature is variable. Syngenta did a duff study that from the outset was unlikely to tell us much; that much is pretty certain, and anything more is speculative. If I were a shareholder I’d wonder where else Syngenta is spending its money badly.
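To put a rough number on that, here is a minimal power sketch under assumed values (not Syngenta’s actual design or variances): with three replicates per group, the chance of detecting even a large effect is dismal.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative assumptions: a large standardised effect (Cohen's d = 1.0),
# a two-sided test at the usual 5% level, three replicates per group.
power = TTestIndPower().power(effect_size=1.0, nobs1=3, ratio=1.0, alpha=0.05)
print(f"Power to detect d = 1.0 with 3 replicates per group: {power:.2f}")
# Prints roughly 0.16 - far below the conventional 0.8 target, so the
# study would usually miss even a large effect.
```

In other words, on these assumptions the study would fail to detect a large effect more than four times out of five – which is exactly why ‘no difference found’ tells us so little.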

And it is dangerous to speculate on the motives of people because these are rarely completely clear to us. But it is noticeable that if Syngenta realised that their study was unlikely to show any differences (because it was a pretty duff study), that wouldn’t be to the disadvantage of a large international company selling pesticides and enmeshed in controversy about their environmental impacts. It would be much better for Syngenta’s commercial interests if there were studies which clearly demonstrated a lack of harmful impact on bees, but in the absence of those, it would be quite handy to have studies that failed to show any impact, even if that failure were due to low statistical power.

The authors of the Syngenta paper were clearly aware that their study was a weak one (see the last paragraph of the Methods – arguably the least-read part of any paper?) but that admission did not find its way into the Abstract of the paper, which also fails to mention the sample sizes of the study. Instead the Abstract ends with the very strong, and unsubstantiated, statement quoted at the top of this post. Because this is science, there are enough data in the paper, for those who look, to realise that this is a conclusion built on sand. And because Prof Greenwood persisted in getting the data and analysing them properly, we now know that the sand was of a Saharan scale.

The 36 words at the end of the Abstract are not justified by the data in the paper and the authors must have known that (and if they didn’t, they should have) and so should the journal. Both let down the public and the scientific community by allowing such an obviously over-egged conclusion to be published on such a contentious and important issue.

[Image: by Luca Galuzzi (Lucag), via Wikimedia Commons]

 


12 Replies to “Conclusions built on sand”

  1. Not familiar with the journal ResearchGate. Is the title supposed to be ironic? Are we to conclude the editors of this journal are corrupt, negligent or incompetent? And what of the authors of the paper and its referees? Have any of these published any comment on how they came to get their conclusions so demonstrably wrong?

    1. ResearchGate is not a journal but rather a site that allows you to search for and read publications. The Syngenta study was published in PLoS One, which is indeed a high-impact journal.

      1. PLOS ONE operates under a ‘pay-to-publish’ concept and whilst the papers submitted are reviewed by a member of the editorial board (with or without external expert advice), the journal only verifies whether experiments and data analysis were conducted rigorously, and leaves it to the scientific community to ascertain importance, post publication, through debate and comment.
        I think that these points are important with reference to the Syngenta paper and the conclusion stated in the abstract.

  2. I use ResearchGate quite a bit for the papers it publishes but I regard those in the same way as every source I use ….. with care and thought ….. and sometimes more than a little skepticism………
    I never use Wiki as a reliable source …. but it has its uses .. There you go Phil 🙂
    https://en.wikipedia.org/wiki/ResearchGate

  3. Self-evidently the final sentence in the abstract of the 2013 paper should never have been allowed, as these conclusions could not reasonably be drawn from the rest of the paper.

  4. If you are the manufacturer of a product that is currently selling well and you find yourself obliged to perform a study into its safety or efficacy, the important thing is to design a study that looks, at first sight, reasonably thorough but which stands very little chance of coming to a damaging conclusion. Then you work hard at crafting the abstract to create the impression you need.
    A cynic might think there are a couple of examples from the GWCT that suggest they are familiar with this approach…

  5. No doubt Defra keeps itself abreast of all the latest relevant research publications (ahem!) but just in case it might have overlooked Prof Greenwood’s analysis I think it would be helpful if people could alert it and their MPs to it. Then when the moratorium on neonicotinoids is reviewed or applications for derogations from it considered they will not be labouring under the misapprehension that Syngenta’s study showed these products to be safe.

  6. I’d imagine Syngenta shareholders expect them to pay for ‘research’ which shows their products are safe so as to keep the profits rolling in. However, PLoS One should have a good, long, hard look at themselves for allowing that to be published in their journal. A good reputation can be damaged so easily, and once academic trust in a journal is lost, no one will take it seriously. I expect a correction may be in order….?

  7. Congratulations to Prof Jeremy Greenwood on an excellent piece of work. Worth noting that if this study had claimed that it showed that neonics could be harmful to bees (just as valid as the false conclusion in the abstract) there would have likely been an outcry from Syngenta, the Science Media Centre, the NFU and their supporters, with demands that the paper be retracted or at least amended, possibly coupled with personal attacks on the authors. Peter Melchett

Comments are closed.