What does ‘implementation’ mean in research? Or, finding out why good ideas can be badly executed.

Mental health tweeter extraordinaire @Sectioned_ caught my attention the other day when discussing a recent study featured by the Mental Elf blog (you can read the original post about the study here). There’s a Storify of all the tweets here.

I got myself into a muddle by responding to one comment in a confusing manner, and then taking four more tweets to try to explain myself properly. The original tweet by @Sectioned_ was:

“‘A good idea stymied by poor execution’ Yes: seems to me studies often do half-hearted stuff that won’t work in patients’ lives.”

And my response:

“as a talking therapies researcher, ‘poorly implemented’ doesn’t necc mean we were half hearted – implementation is v hard to do”

In retrospect, I think the use of the word ‘half-hearted’ probably hit a bit of a nerve. In my experience, researchers usually try very hard indeed to make sure that interventions they’re testing are properly implemented, but this can be extremely difficult. After some further attempts at explanation, @Sectioned_ replied:

“I’ve concluded I don’t understand what you mean when you use the word “implementation”.” (1)

This set me thinking about two things: 1. Which other words do I use so often at work that I forget they might need defining or contextualising a bit when talking to other people? 2. What *do* I mean by implementation? On an average work day I use the word implementation at least two or three times, and probably about once every ten minutes if discussing specific studies. So in theory I should be able to explain exactly what it means…

What does implementation mean to researchers?

On Twitter, my response was to unimaginatively nick a different word that @Sectioned_ had used:

“Maybe what you mean by ‘execution’? I mean how much actually happened irl, did patients get it, did clinicians change practice”

In this sense, implementation just means “what actually happened, and how closely did it map onto what we intended to happen?” This is what we try to find out by doing a ‘process analysis’ in a trial. These are usually qualitative studies, for example interviews with the patients and clinicians who took part. Part of the motivation for this is to assess protocol fidelity: to what extent did what we said would happen actually happen, and so can we say we’ve evaluated what we said we’d evaluate?

In some ways this is too simplistic though. If we find out that nothing quite happened the way we planned, this doesn’t automatically mean we didn’t find out what we hoped to find out, particularly with ‘complex interventions’ (2). If you find out that the intervention didn’t happen as expected, this tells you important things about the feasibility of the intervention. It might mean that the intervention *can’t* work the way it was hoped, or that it encounters too many barriers to being applied in real life. For example, in the original study from the Mental Elf blog, finding that local clinicians didn’t engage fully with care planning may reflect a very real problem with clinician capacity, or with knowing which staff should or could be involved, which would damage the chances of the intervention working in real life.

In complex intervention trials, finding out that something can’t be implemented (feasibility) is closely related to, but distinct from, finding that it just doesn’t work (effectiveness). “Does it work?” and “Does it work in patients’ lives?” are therefore two different questions, but both are important to evaluating a new treatment or way of delivering a service. Arguably, if something can only work under ideal conditions, but doesn’t play out in routine care, then we may as well say it doesn’t work at all.

Even this is too simplistic though. The response to this might be: but can you find out why it didn’t work, and change things so that it does? In a trial, you might try to use a theoretical model of implementation to look at what the specific barriers were, and whether you can do anything about them – for example, can you design or modify an intervention to support clinicians to engage better with the new care planning? This means you’re then looking at designing new interventions to support the delivery of new interventions. At this point my head explodes.

To try to take this back to the beginning: the question of whether something simply isn’t feasible, or whether it could have been executed better in real life, is a very difficult one to answer (3). Academics are trying to understand the problems associated with delivering new interventions in everyday health care, and to build these evaluations into regular trials of whether or not something works – what we call ‘implementation science’. The points that @Sectioned_ made in the Storify illustrate very well some of the challenges to implementation, and the thorny question of whether we can say something worked or not when it was delivered in a way that almost guaranteed it would fail.

Notes

(1) This is only one small part of the discussion @Sectioned_ had as a whole, so I definitely encourage you to check out the full Storify.

(2) ‘Complex intervention’ means one that has lots of different components – for example, trying to encourage changes to diet through exercise and therapy, or involving multiple different professionals in managing a patient – as opposed to testing a single pill or a single change.

(3) There are lots of other issues here which I could go into – the difficulties of capturing these problems, for example, and even the reluctance to do so, for fear of undermining the trial itself or appearing too critical of a service that has generously let you in to do the research. There’s also the issue of whether trials, which are by design artificial enterprises, can ever truly capture what will happen in real life. @Sectioned_ also raised the issue (check out the Storify!) of interventions being delivered that are obviously dull or pointless to patients, and queried whether academics ever ask patients in advance – this is very relevant to PPI, or ‘patient and public involvement’, a topic very close to my heart.


2 Responses to What does ‘implementation’ mean in research? Or, finding out why good ideas can be badly executed.

  1. I think this very accurately sums up the tensions that exist between effectiveness and implementation research. Two sides of the same coin, but for a long time attention has been focussed on the former. In the longer term there are other sustainability issues that play out: what happens when the research team leaves? What happens to the delivery of an intervention when services are restructured? Is the delivery of an intervention diluted over time? I’m not sure we’ve got our heads around this aspect of implementation… yet. Can I also wholeheartedly endorse your point about the role of PPI. It is so important that we carry out research that is important to patients, and that we work in partnership with patients to ensure that the research we do is of the highest quality and that services meet patient need. For more information about PPI, please see http://www.population-health.manchester.ac.uk/primer/ and http://www.invo.org.uk/

  2. Pingback: Some meandering thoughts on scientific studies | Sectioned
