Express, Test and Cycle – Innovation as the Bench Work for RCTs

Yesterday I was at the MindTech launch conference in London. I was wearing two different shoes, but despite that had a lovely day.

One issue close to my heart came up at the conference a lot, and that was whether traditional evaluation methods – randomised controlled trials (RCTs) – are inappropriate in the mental health technology field, as they can be too slow and unresponsive to changes in technology itself (I’ve written previously about the challenges to traditional evaluation posed by technologies). Ewan Davis put this beautifully by saying the NHS needs to learn to fail faster – rather than use its current evaluations, which might work out pretty quickly that something doesn’t work, but are tied in to sticking with it until the bitter end. Designers refer to this process as “express, test, and cycle”. You design (express) an intervention, test it with users, identify problems or barriers, and cycle this back into a new expression. In a trial, we stick with the original expression, because that’s what we’re trying to test, and mucking about with it would mean we don’t know what it is we’re evaluating. Clearly, if you want to cycle through development stages, RCTs aren’t the place to do it.

I then discussed this further with Mark Brown on Twitter, who emphasised that focusing on trial evaluation could stifle innovation.

Now, I’m a trialist. I’m a bit obsessed with trials, with how amazing they are and the benefits they’ve brought to medicine. I very much believe they are the gold standard of evaluation. But as Mark has commented, they’re not the only form of evaluation. I’m very aware that I might be guilty of encouraging the view that only RCTs count, because whenever we talk about evaluation, I tend to leap to “has there been an RCT? If not, why not?”. I get very suspicious when people seem to want to avoid trials, as I think every treatment should be tested to this gold standard. But what I realise people like Mark and Ewan are trying to say is that when you want to be creative and innovative, you need to think more flexibly and responsively than RCTs will allow.

On the train home, I started to think about whether the problem for innovation in services like mental health comes about because we’re not used to ‘human subjects’ (forgive the outdated language) being involved in early stages of development. We’re familiar with the idea of the development work done at the bench or in the lab in biomedical science, and then only once it gets to the human testing stage is it evaluated in full scale RCTs. But this doesn’t work in technology, and in innovation of services on the ground. The whole point for them is that the development and cycling happens with real people. But if we think that involving people or services = doing an RCT, we’re stuck.

In an RCT, the intervention or product has to be tightly controlled, both to preserve the integrity of the trial and for ethical or governance reasons. In a trial, we have to submit ethics amendments every time even the slightest change is made – I can easily see how this would stifle attempts to rapidly cycle through design. Similarly, funders expect us to outline exactly what we will do, when and how, which again doesn’t allow for scrapping bad ideas and starting afresh (i.e. failing faster). Funders would say that they need to know in advance their money will be spent wisely, but locking people in to specific avenues of work creates a situation where if you have a bad idea you have to stick with it, which seems like the worst use of money possible.

I wonder if this is somewhere that patient and public involvement (PPI) can play a role. On a very practical level, it would avoid the ethics issues – you don’t need ethics approval for PPI as those involved aren’t ‘participants’ as such (which is not to say PPI doesn’t have ethical issues or that they’re not important, simply that it’s recognised that PPI doesn’t fit in the governance frameworks as a type of study ‘done to’ participants). This might also help in terms of defining what the outcome of interest should be – it might not be appropriate to immediately expect to measure clinical improvement, as you might not be sure what or how you’re improving things (or it may even emerge that it’s not clinical factors you’re interested in, but patient empowerment or quality of life). The outcome could in fact be level of user involvement – a measure of how much we’re building something with the end user in mind, rather than effect on symptoms or health – or, as Mark put it, what the end customer needs in order to trust and use your intervention.

I also wonder if this would help with the image problem of researchers as collecting data without doing or changing anything – that is, it would put knowledge into action. There are already established calls to do this, but the focus is on putting research into action at the end of the research process rather than at the beginning. So by the time we come to put anything into action, it must already have been fully evaluated. How, then, can you ever evaluate and adapt as you go? How can we exploit the learning, apply it and test it, learn again, and feed this back continually into services?

Another issue that came up at MindTech was the problem of duplication – there are now lots of start-ups in mental health tech, and chances are someone else has already tried your idea. In traditional research, the first stage of any work is to review the existing evidence. This typically means a review of RCTs, and if there haven’t been any you may start as if from scratch. This ignores the wealth of information that might be around in terms of early innovation. Perhaps the first step should be scoping what’s really out there, and identifying early start-ups that could be collaborated with. This would make use of that untapped resource and could avoid duplication of effort. It would also formalise helping new or early innovators develop their work as part of the research process.

I want to be clear that I don’t think any of this replaces trials – it’s just recognising the different stages of work and evidence necessary to build the groundwork for a final trial evaluation. I still believe that trials are needed to answer the final question of ‘does it work?’, but I also see that they’re too blunt a tool for answering ‘how can it work?’, ‘why does that not work?’, and ‘what do people want it to do when it’s working?’. Those are the questions where innovation is needed, rather than evaluation.

So, in summary, I wonder if the key to allowing more rapid – and innovative – development in services is to realise that these are essentially ‘the benchwork’ phases of complex interventions. This is the early, hit and miss, oh-no-it-just-exploded vs my-god-I-just-invented-atomic-sausages time. It just happens that the bench, for us, is in the real world. This way we can have reflective and adaptive progress, with innovative development. Which, um, as an acronym spells ‘rapid’. See, I can’t help but come back to being a trialist – in trials, you ain’t nothin’ without an acronym 😉


  • Excellent comments on mental health innovation here and here from Mark and ClaireOT – though from a different event which had already used the name MindTech! (See – someone else has always had your idea first…)
  • I also got to meet James Woollard irl yesterday – although we spent most of the day managing to miss each other. Importantly, I realised that even if you’ve been twitter pals for a while, it’s probably still creepy to tweet “What are you wearing” at someone…
  • The problems of RCTs are certainly recognised by academics, and alternative designs (which still keep the core features of evaluation, randomisation or controlled comparison) have been suggested, for example here. Methods of evaluation such as the N of 1 trial could be incorporated within the development cycles described above, so we could potentially still be collecting trial data alongside more rapid and flexible development.

4 Responses to Express, Test and Cycle – Innovation as the Bench Work for RCTs

  1. Claire says:

    Interesting thoughts, thank you for sharing them. It’s really valuable to hear from researchers about what the parameters of this debate are, and I would like to explore them further. Sorry if any of my questions seem simplistic to you!

    How many trials (of any design) have been run that simply compare a control group (standard mental health care) with a test group (standard care + digital innovation: which could be an app, a website resource, a social network etc.)? Are approaches such as Action Research or Ethnography, which could complete these research cycles quicker than RCTs, preferable for the reasons you list above? Are we developing methodologies for research in mental health innovation elsewhere?

    If we commit to the importance of this research, is it reasonable to expect SME/Social Enterprises to fund and carry out the research, or should this be done in concert with Universities and Trusts?

    As an entrepreneur health professional, I would love to know that the products I’m designing are clinically (or psycho-socially) effective. This is very hard to do to the standards of clinical evidence I would wish to use to guide my evidence-based practice, due to my isolation from clinical research communities as an SME/SocEnt, and due to the (financial, staffing) resources required to do such trials.

    How do we enable partnerships in mental health innovation to form? Can this happen outside of Universities and Trusts? Is this potentially a role for the AHSNs?

  2. Hi Claire,
    thanks for commenting! There are trials that compare standard care to standard care plus a technology (we just completed one called REEACT that compared standard care to standard care plus computerised CBT). I think the issue is whether you can say they looked at a ‘digital innovation’ – for example, the REEACT trial took 5 years and we evaluated the same programme, without updates or modifications, throughout.
    I think Action Research and Ethnography would be great ways of framing innovation work, although I think there is still a sense in research that you should be observing something, not messing with it, and I think getting over this requires almost a culture change for researchers. I wonder if one way of approaching this is to boil down early on which aspects are fundamental to something and which are the modifiable bits which can be adjusted to local contexts or personalised – we could then say we’re evaluating the core components whilst recognising some aspects will be changed and evolved. I guess the challenge is knowing which bits are ‘core’ from the beginning!
    I actually think funding should come from Universities/Trusts – but this could create its own problems, for example Universities are very big on intellectual property now, and if they funded development work they might consider that they own the final product – I think working out how collaborations between social enterprises and Universities can be sustainable, profitable and fair for both will be the challenge.
    I think this does require partnerships, and I think it’s notable that social enterprises aren’t often talked about when Universities talk about collaborating with outside organisations (though this might reflect my limited experience rather than an absence per se). My experience of the AHSNs (again limited to date!) is that they focus on NHS organisations rather than looking to social enterprises. As researchers we’re encouraged to seek collaboration with the big charities and with big names like Philips or Bosch – clearly there’s a need to look at collaborations with small or individual start-ups. I’m not sure why this is neglected – maybe it’s seen as more risky or less prestigious, or perhaps there’s a focus on national-level organisations at the expense of local groups?

  3. Pingback: Connect and Reflect for #NHSTalkTech pledge @NHSchangeday | I/O

  4. Pingback: Go Your Own Way: Desire Paths, Design and eHealth research | saraheknowles
