Mental health tweeter extraordinaire @Sectioned_ caught my attention the other day when discussing a recent study featured by the Mental Elf blog (you can read the original post about the study here). There’s a storify of all the tweets here.
I got myself into a muddle by responding to one comment in a confusing manner, and then taking 4 more tweets to try to explain myself properly. The original tweet by @Sectioned_ was:
“‘A good idea stymied by poor execution’ Yes: seems to me studies often do half-hearted stuff that won’t work in patients’ lives.”
And my response:
“as a talking therapies researcher, ‘poorly implemented’ doesn’t necc mean we were half hearted – implementation is v hard to do”
In retrospect, I think the use of the word ‘half-hearted’ probably hit a bit of a nerve. In my experience, researchers usually try very hard indeed to make sure that interventions they’re testing are properly implemented, but this can be extremely difficult. After some further attempt at explanation, @Sectioned_ replied:
“I’ve concluded I don’t understand what you mean when you use the word “implementation”.” (1)
This set me thinking about 2 things: 1. Which other words do I use so often at work that I forget they might need defining or contextualising a bit when talking to other people? 2. What *do* I mean by implementation? I use the word implementation at least 2 or 3 times on an average work day, and probably about once every ten minutes if discussing specific studies. So in theory I should be able to explain exactly what it means…
What does implementation mean to researchers?
On twitter, my response was to unimaginatively nick a different word that @Sectioned_ had used:
“Maybe what you mean by ‘execution’? I mean how much actually happened irl, did patients get it, did clinicians change practice”
In this sense, implementation just means “what actually happened? How closely did what actually happen map onto what we intended to happen?” This is what we try to find out by doing a ‘process analysis’ in a trial. These are usually qualitative studies, for example interviews with patients and clinicians who took part. Part of the motivation for this is to assess protocol fidelity – to what extent did what we say would happen actually happen, and so can we say we’ve evaluated what we said we’d evaluate?
In some ways this is too simplistic though. If we find out that nothing quite happened the way we planned, this doesn’t automatically mean we didn’t find out what we hoped to find out, particularly with ‘complex interventions’ (2). If the intervention didn’t happen as expected, this tells you important things about its feasibility. It might mean that the intervention *can’t* work the way it was hoped, or encounters too many barriers to being applied in real life. For example, in the original study from the Mental Elf blog, finding that local clinicians didn’t engage fully with care planning may reflect a very real problem with clinician capacity, or with knowing which staff should or could be involved, which would damage the chances of the intervention working in real life. In complex intervention trials, finding out that something can’t be implemented (feasibility) is closely related to, but distinct from, finding that it just doesn’t work (effectiveness). “Does it work?” and “Does it work in patients’ lives?” are therefore two different questions, but both are important in evaluating a new treatment or way of delivering a service. Arguably, if something can only work under ideal conditions, and doesn’t play out in routine care, then we may as well say it doesn’t work at all.
Even this is too simplistic though. The response to this might be: but can you find out why it didn’t work, and change things so that it does? In a trial, you might try to use a theoretical model of implementation, to look at what the specific barriers were, and whether you can do anything about them – for example, can you design or modify an intervention to support clinicians to better engage with the new care planning? This means you’re then looking at designing new interventions to support the delivery of new interventions. At this point my head explodes.
To try to take this back to the beginning – the question of whether something simply isn’t feasible, or whether it could have been executed better in real life, is a very difficult one (3). Academics are trying to understand the problems associated with delivering new interventions in everyday health care, and to build these evaluations into regular trials of whether or not something works – what we call ‘implementation science’. The points that @Sectioned_ made in the storify illustrate very well some of the challenges to implementation, and the thorny question of whether we can say something worked or not when it was delivered in a way that almost guaranteed it would fail.