When did it become trendy to start slating randomised controlled trials?


Dismissing Randomised Controlled Trials seems to be an increasingly popular thing to do these days – well, we are living in strange times…Last year Sally Cupitt (Head of NCVO Charities Evaluation Service) asked whether RCTs were the gold standard or just ‘fool’s gold’. A few weeks ago eminent professors Angus Deaton and Nancy Cartwright set out (in a suitably academic manner) their conclusions on the ‘limitations of randomised controlled trials’. Now NfP Synergy’s Joe Saxton has jumped aboard the anti-RCT bandwagon, describing them as ‘another false dawn’ and ‘evaluation fantasy’.

Whilst it may make for a good blog post to challenge the growing awareness of and interest in Randomised Controlled Trials, it’s neither helpful nor accurate to dismiss their benefits so readily. Let’s look at some of the criticisms levelled at RCTs:

  • RCTs are ‘fraught with problems and complexity’ and Nesta’s guide to RCTs is 78 pages long, so this must be true.

*Sigh*

Some RCTs are complex (though if we are tackling complex challenges in the charity sector – see point 4 below – should we be so scared of complex methods of evaluating impact?). Some too are complicated. But the existence of some poor practice doesn’t justify dismissing an entire method. That’s the sort of insidious thinking that has led some thinkers and politicians to characterise all charities as poorly run, self-interested, inefficient ‘sock puppets’ of the State. Surely we don’t wish to subscribe to that type of logic?

There is a tendency, as with many specialist techniques, to shroud them in complexity and technicality, which serves the interests of experts and prevents their wider application. This does not make the method complex or problematic. It merely highlights the need for better understanding, support and application.

  • Nobody mentions double-blinds.

Apart from the fact that they do, if you are inclined to delve into the academic literature, the real issue here is that it is a red herring. That’s the sort of sentiment which prevents RCTs from becoming a mainstream evaluation tool. It doesn’t matter if you don’t run a double-blind trial. Let’s be pragmatic about things – there are things which are essential in running trials and things which are nice to have. In fact, we could talk about experimental methods as a continuum on which the RCT is one approach. It might be the one we aspire to – complexity, scale and proportionate cost can make it impractical – but it’s not right for every occasion. Not even the most ardent trial evangelist would suggest that. But there are plenty of instances where trials can significantly enhance current practice.

  • It is unethical to withhold an intervention from some (randomly selected) people

This one just makes me laugh and cry in equal measure as it displays an absurd degree of selection bias and wholly misrepresents the very notion of understanding what works. Firstly, if you know something is going to provide a benefit to a particular group, then do not (ever!) waste time on a trial. Just do it. Give it to the people as soon as possible.

But if, on the other hand, you think an intervention is going to be effective but you’d like to know for sure whether it works, a trial could give you confidence in the result. When we test new interventions we don’t know if they are going to work – that’s why we are testing them. Sometimes the things we ‘know to be true’ turn out not to be when properly tested. A classic example of this is the treatment of serious head injuries with steroids, which had been standard medical practice for 30 years until someone tested it through an RCT and found that the steroids were killing people. And so what was known to be true changed…overnight.

Then there’s the convenient overlooking of the fact that running a pilot is considered perfectly reasonable – indeed the post goes on, without a hint of irony, to talk about an intervention in one South London school…but why was it run in only one school? Was it ethical to deprive students in other schools of this intervention?

Somehow it’s perfectly fine to run a pilot but unethical to run a trial. Hmmmm…

  • Measuring single variables isn’t realistic when charities are often tackling complex challenges

This is more of an argument for RCTs than against them as far as I am concerned. It’s precisely because of the complexity that understanding the impact of single variables is useful. RCTs allow us to separate out the other ‘background noise’ and determine just what difference the intervention makes. Of course, that doesn’t mean it’s necessarily sensible to use trials for longitudinal studies where the impact may take place over a generation. RCTs, like every other evaluation method, need to be used appropriately.

What we have found in our work is that small things can make a big difference – changing the text in how we communicate, altering the way information is presented, how we ask someone to do something or the way we design a service to be more customer-centric. All these things can make a significant difference to our impact and I’d suggest it’s our moral duty to put our assumptions to the test.
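
To make this concrete, here’s a minimal sketch of the kind of small-change test I mean – in Python, with entirely made-up numbers rather than figures from any real trial. We randomly split a mailing list, send each half a different version of a letter and compare response rates. Because allocation is random, everything else about the supporters washes out across the two groups on average, so any reliable difference in response can be attributed to the change in wording:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers throughout: a charity mails 4,000 supporters,
# randomly assigning each to the current letter (A) or a reworded one (B).
n = 4000
assignment = rng.integers(0, 2, n)  # 0 = letter A, 1 = letter B

# Simulate noisy outcomes: a 5% baseline response rate, plus a
# 2-percentage-point lift for letter B (the 'true' effect we pretend
# not to know in advance).
base_rate, true_lift = 0.05, 0.02
responded = rng.random(n) < (base_rate + true_lift * assignment)

# Randomisation balances age, giving history, mood and so on across the
# groups on average, so the comparison isolates the wording change.
p_a = responded[assignment == 0].mean()
p_b = responded[assignment == 1].mean()

# Two-proportion z-test: is the gap bigger than chance alone would explain?
p_pool = responded.mean()
n_a, n_b = (assignment == 0).sum(), (assignment == 1).sum()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
print(f"A: {p_a:.3f}  B: {p_b:.3f}  lift: {p_b - p_a:.3f}  z: {(p_b - p_a) / se:.2f}")
```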

  • The sample size needs to be big

Yes, it does. So it’s not going to be appropriate to run a trial in every circumstance – and in these instances it’s possible to use experimental methods without running full-blown trials. It’s also very valuable to recognise the limitations of what we are doing. No one is saying that we must run trials on everything…but we must avoid over-confidence in attributing change to our interventions without considering external actors and environmental factors. To dismiss RCTs simply because they don’t work in every instance is frankly ridiculous.
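
To give a sense of what ‘big’ means in practice, here is a standard back-of-envelope power calculation, sketched in Python with illustrative figures of my own rather than numbers from any particular study. The arithmetic makes the trade-off plain: detecting a small lift takes thousands of participants per arm, while a large one needs far fewer – which is exactly why a full trial is proportionate in some settings and not others:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control: float, p_treatment: float,
              alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate participants needed per arm to detect a difference in
    proportions with a two-sided test (standard normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_treatment) / 2
    effect = abs(p_treatment - p_control)
    return ceil(2 * p_bar * (1 - p_bar) * ((z_alpha + z_beta) / effect) ** 2)

# Illustrative figures only: a 5% -> 7% lift needs roughly 2,200 people
# per arm, while a 5% -> 15% lift needs only around 140 per arm.
print(n_per_arm(0.05, 0.07))
print(n_per_arm(0.05, 0.15))
```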

Contrary to what is suggested, there are innumerable instances where RCTs will work, are not overly complex or prohibitively expensive, and are wholly achievable for a great number of charities. (Indeed, where scale or resources are an issue, trials might even be a catalyst for collaborating to share learning and increase efficiency.)

‘Why have an evaluation standard that is applicable to very few of the interventions that charities make?’

Ummm…because I thought we were in the business of trying to raise standards and quality in the charity sector.

There needs to be an intelligent understanding of what RCTs are, how they work and when it is appropriate to use them. If the starting point of those seeking to support improvement of evaluation and impact assessment in the sector is to rubbish an entire method simply because it’s not a panacea for all the sector’s ills, what chance do we have of that?

And if anyone would like some suggestions of practical ways in which RCTs could easily be used by charities feel free to get in touch and I’d be more than happy to oblige.

Comments
  • Bettina Ryll, MD/PhD

    SIGH – likewise.

    The discussion around RCTs and their value, or rather lack thereof, is beginning to show an annoying lack of specificity.
    The intention behind RCTs in medicine was to move away from doctors’ experiential knowledge (or rather opinion – as we know, often that ‘knowledge’ didn’t hold true); they were designed to detect small to medium differences between treatments, all with the intention to protect patients and deliver better medical care. Over time, RCTs became the holy grail in the religion of evidence-based medicine, and there is this interesting correlation between remoteness from treating patients and insistence on ‘scientific rigour’. Someone arguing his or her soul out for blinding? Randomisation? Placebos? Especially in a situation like cancer or other desperate conditions? You bet, not someone seeing patients – or at least not the dying ones.

    You argue RCTs are large. Well, they WOULD be large if only used in the appropriate context, namely a small expected effect size. However, I strongly recommend you review the history of Melanoma trials – the last 7 years will be sufficient – where we have seen Phase 3 trials with HUGE gaps in overall survival (so even pleasing the hard-core trialists with hard endpoints) in which our patients were sacrificed on inferior control arms over and over again. We knew 20 years ago that Melanoma was resistant to chemotherapy (I learned that at Med school back then), but it was the ‘gold standard’ (of desperation) therapy, so we got yet another set of ‘gold standard’ RCTs. And not ONE but a series of them – huge effect size, small RCTs, people predictably dying on inferior control arms. Nicely waved through by ethics committees as nicely designed RCTs. We have now had an oncologist apologising publicly for putting Melanoma patients on chemo (ESMO 2016), and we have had desperate Melanoma oncologists pleading for humane trial designs (ASCO 2014).
    I totally get the shiny appeal of scientific rigour – generations of medics have been trained on it at Med School, and I was naive enough to believe that it was not only a tool to protect patients from my own lack of experience but also the ultimate way to humane medicine. Unfortunately, what it took to wake me up was my own husband entering one of those smart, grand RCTs, unfortunately one with a clearly inferior comparator (MEKi vs DTIC, 2:1, Ph3, then GSK), as we already knew from the Phase 1 which patients lived. And which died…. The days waiting for the randomisation result (Friday afternoon till Tuesday morning) are among the worst and most memorable of my life, up there with the days just before his death less than a year later.
    It is THIS type of RCT that is upsetting and that today’s patients are no longer willing to be subjected to – emancipation has also reached medical practice. The internet makes medical information accessible to a previously unknown degree and offers collaborative opportunities that actually allow patients to enforce their wishes. If the ones subjected to those kinds of trials object, surely the scientific community should listen up? Unless this was never about the patient to start with.

    RCTs were invented to save lives, to deliver better medicine, not to sacrifice lives in the name of Science. We need humane medicine, so maybe we should be less blinded by the gold of ‘gold standards’ and go for a less showy, everyday, pragmatic approach to medical research – less shiny, but more thorough.

  • Hamish Chalmers

    Great post (I was directed here from the Alliance for Useful Evidence). You have raised and dealt well with the most frequent straw-man criticisms of RCTs. Thank you. I research in the education sector and am flabbergasted by the ignorance over what constitutes an RCT among people who are nonetheless extremely happy to rubbish the entire methodology.

    One particular bugbear of mine is the ethical double standard in education where teachers can choose to use pretty much whatever they like in their classrooms (whatever the state of the relevant evidence is, and potentially in blissful ignorance of the possible negative effects of any given approach), yet teachers who want to compare alternative approaches in a properly controlled experiment to actually help reduce that ignorance are considered to be acting unethically. You might be interested in a post I wrote about this issue here: https://l3xiphile.wordpress.com/2016/09/14/the-double-standard-of-ethics-in-educational-research/ This, of course, is as much a unique feature of RCTs as is blinding, or scale, or cost … etc etc etc – i.e. not unique to RCTs at all.

    I like your analogy with pilot studies. The issue reminds me of the doctor who says that he must have ethics approval to give half of his patients a new drug, but needs no such approval to give all of his patients that drug. The logic employed here in the education field, if taken to its logical conclusion, is that there should be no teacher autonomy at all: all teachers should work from identical textbooks, using identical methods, for fear that individual innovation of any sort would necessarily deprive some students of the new technique. Of course, innovation coupled with rigorous evaluation is how we make progress. The RCT is a strong tool in helping us to meet that end.

  • David Morris

    Speaking, in a personal capacity, as someone who heads a team of researchers who evaluate (non-medical) public policies and who train local authority staff in how to design and conduct evaluations, claims that people will rubbish RCTs as an approach and/or be ignorant of double standards over piloting etc. are not backed up by my experience. Most staff we come into contact with are aware of the limited data/evidence underpinning this work, want that to change, and have a broad understanding of how RCTs can help.

    Perhaps the reason why RCTs are gaining less traction outside of medical research is more to do with the practicalities of trials than their theoretical basis. We’ve just completed some Nesta-funded research to investigate whether we could design and implement an RCT of business support activities. Designing a theoretical trial was actually quite easy, but implementation was impossible due to a range of factors beyond the delivery staff’s control, including: lack of access to up-to-date outcomes data at the relevant spatial scale; the absence of post-delivery funds that could be used to track impact into the future; the fact that many services are now delivered via a network of sub-contractors, each with their own delivery contract, who cannot retrospectively and randomly be denied the chance to deliver support and hence earn a fee; Whitehall/European funding rules essentially saying you are contracted to deliver X outputs in this way over this time period etc.; and a simple lack of manpower to administer a trial.

    A lot of RCT advocates seem to come from academic/medical backgrounds where unimpeded access to large datasets, the ability to conduct national level (hence large sample size) research over a lengthier period of time, complete freedom in design and delivery of research etc., the ability to get postdocs to work for you for ‘free’ to do all the trial legwork are more likely to be in place. A better understanding of the practicalities of undertaking monitoring and evaluation in local public sector organisations is needed if we want to see more RCTs – the language and approaches used by the Behavioural Insights Team are a good example of trying to make this bridge.
