On the RSA disinviting me from chairing Professor Susan Greenfield

Briefly (from work, just out of ward round, with slow NHS internet): 

Two months ago I was invited by the Royal Society of Arts (RSA) to chair an event where Susan Greenfield (former director of the Royal Institution) was to present her scientific theories on “Mind Change”. They hoped that I would be able to listen, and then offer her some challenging questions about the evidence for her case (because her argument is widely regarded as flimsy). I was happy to take a few hours away from clinical work, as I thought this could be an interesting event. My concern is that Professor Greenfield has avoided setting out her frightening scientific theories clearly, with evidence, to her scientific colleagues, for informed critique. That is why I wrote this piece, in 2011:


So I said yes.

This event was announced by the RSA with me as chair.

This event was posted on the RSA website with me as chair.

Then something changed.

When Professor Greenfield was told about this plan, she refused to have me as chair, and the RSA disinvited me.

I think this is a shame. They did suggest that I should “debate” with Professor Greenfield on “Mind Change”. As I explained to them: I have no special expertise, knowledge, or position on “Mind Change”. I don’t feel I’m in a position to write and deliver a 20-minute presentation on it. Unlike Professor Greenfield, I haven’t just published a book on it. My concern is simply that Professor Greenfield is misusing science, and expressing her claims inconsistently, and I’m very happy to discuss that publicly.

(If anything, despite the poisonous things Professor Greenfield has said about me, I’m a little concerned not to be too ad hom here: I can’t picture what a formal debate title would be. “Frightening scientific theories should be published clearly in a scientific journal”? “Professor Greenfield misrepresents evidence and avoids forums where serious discussion of her ideas can be offered”? I don’t feel comfortable with a debate title built around an individual in that way.) 

Matthew Taylor, the Director of the RSA - doing the same job as Greenfield did at the RI - telephoned to explain their position. He was very concerned, understandably, to protect the reputation of the RSA. I said that I couldn’t commit to taking time out of work to write and deliver a lengthy presentation for him on “Mind Change”, but that I was happy to be a “respondent”: to set out my concerns about the preceding case made by Professor Greenfield (“Why much of what you’ve just heard is a distortion of science, and why the RSA probably isn’t the best place to resolve that problem.”). That was a no-go.

I will very happily do this, at any prominent talk in London by Professor Greenfield where someone wishes to invite me. 

I can think of many others from the scientific community who may also be willing to do this, across the country. 

I also think it would have been very useful and informative, and fair, to have questioned Greenfield about her claims, at this RSA event today. 

It seems to me that this episode illustrates one point very clearly. Science is about expressing your hypothesis clearly, and consistently, in a forum that welcomes and facilitates critical appraisal of the evidence presented. That is why we have academic journals, and academic conferences. It seems to me that Professor Greenfield has sidestepped this healthy process. In doing so, she has created a situation where it is impossible to critically appraise the evidence for her claims, without her being able to present this as some kind of ugly personal attack. That strikes me as extremely unhealthy. 

Here are some good recent pieces on Professor Greenfield’s claims by academics, clinicians, and science communicators:





Once again: if you are organising a talk by Professor Greenfield, invite me to come and respond: to set out the misuse of science - and ask questions - afterwards.

I am very keen. 

I will bring slides. 

email: ben@badscience.net

web: www.badscience.net

twitter: @bengoldacre


This piece from 3/10/14 is very good:


Should you liveblog your cancer?

Here’s a squabble that’s irked me: the criticisms some have received, from conventional media commentators, for live tweeting and blogging their cancer.


The core misunderstanding here seems to be around what social media is. To many professional journalists it’s a job, a public performance, and perhaps a strategic public service. To most people, it’s just something you do. I think the people who are using social media to discuss their illness are in the latter category, just using social media socially - even if some of them get a larger audience - but they’re being judged by some as if they’re engaging strategically with a work tool.

Anyway, I must sort out badscience.net so there is a place to post banal passing thoughts like these… 

My comments to IoM on iPD sharing

[I’m very annoyed, as Webex failures have just stopped me speaking at an IoM panel on individual patient data sharing for clinical trials. As a futile outlet, without context, meaningless to any more than about a dozen people, here are my notes on what I was going to say - very quickly - over five minutes. I’ll polish these into a paper over the next day or two.]

I won’t duplicate what’s been said today, much of which has been extremely positive and practical on data sharing.

Undoubtedly there are huge gains for patients on the table: validating trialists’ own analyses of their own data, doing IPD meta-analyses for greater accuracy, better network meta-analyses for comparative effectiveness research, and subgroup analyses.

I will add some specific points.

1. There are some areas where urgent action is needed to prevent further problems:

            (a) We need to change consent forms and legal agreements, immediately, where they act as a barrier to sharing IPD

            (b) We need to consider seeking out all available IPD urgently, and taking it into archive under lock and key, while preparations are made for clean release, to insure against future data losses

2. Related to that: it is good that people are discussing how to make perfectly commensurable datasets, but difficult formats should never be an argument against release; it is commonplace in epidemiological research to spend time fighting and merging difficult data across different software packages.

3. It’s important that academics are held to the same standards as industry, and that the ambition is all trials of all uses of all treatments currently being prescribed. To be clear, that includes phase 4 trials, trials in BRIC nations, trials on unlicensed uses, etc. These are the trials used to inform current practice.

4. There is extensive previous experience of sharing rich IPD, e.g. GPRD, and of previous IPD meta-analyses going back to work on aspirin in the 1970s. This expertise and experience should be exploited.

5. Regarding the technology, I understand some companies are working with SAS on pooling data behind a firewall. It’s important that any systems created over the coming years are open to deposits of data from people who are not part of any such consortium.

6. We need to ensure we know what subset of all trials is being released as IPD, as it’s unlikely to be all of them, if only for practical and legal reasons.

7. To put that subset fully into context, we need to know the summary results of all trials conducted. We know that summary trial results are still routinely withheld, and that trials are still conducted and published without having been properly registered. If we can at least - finally - get summary results for all trials, we can then see whether the trials sharing IPD are systematically different from those that are not. This is likely to be the case, since (in a worst-case scenario) a trial with strong positive results that has been published and used in promotional activity is likely to be in a more accessible condition than a trial that found no benefit for a new treatment and was consequently neglected.

8. The importance of retrospective access must not be forgotten. The EFPIA, PhRMA and EMA proposals are all prospective. This is the same flaw as the FDA Amendments Act 2007. But the vast majority of prescriptions are for treatments that came on the market before 2007; indeed, most are from more than ten years ago. We have a paper we are submitting shortly looking at speed of adoption: prospective release alone would do little to improve the evidence base for everyday clinical decisions, for a decade or two. Even if IPD sharing will be incomplete, we need at least an ambition and expectation that material from the past two decades should be shared.

Lastly, to address some of the barriers raised:

- There are concerns about scares being driven by flawed analyses, as in the vaccine-autism scare. This is a bizarre misreading of that scare, which has been driven for a decade now by the widespread popular claim - among anti-vaccination activists - that drug companies, regulators and the research community are conspiring to withhold information about the risks and benefits of treatments. It is very peculiar that anyone should suggest we try to fix this problem by withholding information about the risks and benefits of treatments from patients and the public. In reality, public trust in medicine and the regulatory process is enhanced by transparency, not by greater secrecy. With respect to the impact on doctors of poor analyses: it is very common for poor-quality observational studies to be conducted and published, and they are managed reasonably well in the academic ecosystem before they have an impact on practice.

- As a matter of research need, we should study what patients’ expectations were under previous consent forms, and survey the precise wording of those forms.

- We need people to post protocols before they conduct analyses, but we also need to avoid double standards: industry often do poor analyses of their own data, breaking protocols, etc. They should post protocols and statistical analysis plans (SAPs) for all trials, just as secondary analysers should.

- It is important that exploratory analyses are permitted by whatever access system is in place. The key is that people don’t set out to conduct an exploratory analysis, then present their results - as happens, sadly, in both academia and industry - as if they were confirming a prespecified hypothesis.