Trials 2.0? How trials are evolving to evaluate technologies

I’m in a sulk. I’m currently missing the Medicine 2.0 conference. All the cool kids are there…

Fortunately, I’ve been able to follow it on twitter (#med2). One of the interesting things that’s coming up is how technology is pushing researchers to expand their traditional research designs. Randomised Controlled Trials, or RCTs, are typically described as the bedrock, the gold standard, the big cahoonie of evidence based medicine (maybe not that last one…). I think the principle of RCTs is beautifully elegant. To find out if a treatment works, you split a bunch of people randomly into groups – randomly so that any possible biases aren’t systematically different between them – and you see which group does better. That’s it! There’s a wonderful book about the history of trials called Taking the Medicine which I highly recommend if you want to know more.
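To make that split-and-compare idea concrete, here's a toy sketch in Python. Everything in it is invented for illustration: the participant IDs, the outcome scores, and the crude difference-in-means "effect" are all made up, not a real analysis.

```python
import random
import statistics

def randomise(ids, seed=0):
    """Shuffle participant IDs and split them down the middle, so any
    lurking biases aren't systematically different between the arms."""
    rng = random.Random(seed)
    pool = list(ids)
    rng.shuffle(pool)
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]

def estimated_effect(outcomes, treatment, control):
    """Crude effect estimate: difference in mean outcome between arms."""
    return (statistics.mean(outcomes[i] for i in treatment)
            - statistics.mean(outcomes[i] for i in control))

# Hypothetical outcome scores (higher = better), one per participant ID
outcomes = {i: 50 + (5 if i % 3 == 0 else 0) for i in range(20)}
treatment, control = randomise(outcomes)
effect = estimated_effect(outcomes, treatment, control)
```

The whole trick is in `randomise`: because allocation is random, whatever makes the groups differ at the end is (on average) down to the treatment, not to who ended up in which group.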

So, RCTs are all about elegant simplicity in design. But in practice, they can be huge, slow, lumbering things. One of the most common criticisms of trials of technology is that by the time the trial is finished, the technology has changed vastly. Technology trials end up being like Jurassic Park – huge, expensive and full of things that are extinct.

Increasingly, then, we’re looking at ways of researching technologies that can better capture what’s going on. An example is the move to doing ‘Research in the Wild’. Tech designers use the phrase “in the wild” to talk about doing research actually with the end users of a product, in the environment in which they’ll use it. Essentially, this is about doing ethnographic research, to find out what actually happens when people try to use a technology in their everyday, messy, busy lives – rather than simplicity, we’re trying to capture the complexity of what goes on. (NB. It’s worth noting this doesn’t only apply to patients – understanding how clinicians and health professionals adapt, or fail to adapt, to use technologies fits in here as well.)

An interesting aspect of this is the idea of ‘bricolage’ vs control. In a typical trial, we want to keep the treatment as controlled as possible, to make sure that we’re evaluating exactly what we said we would. But with technology, there is increasing recognition that technologies are embedded in everyday life – and when they’re not, people do a bit of DIY around how they use them. They hack them to make them fit their own needs and circumstances. This is ‘bricolage’, defined by the ATHENE project as “pragmatic customisation”, i.e. the way we adapt how something is supposed to work into how it actually works for us. Understanding this requires different ways of collecting data, such as observation and ethnography, and also poses some challenges to typical ways of controlling for variation in trials.

Bricolage also has exciting potential though. A constant struggle in trials research is how to take something generic that you can hand out to a population (such as ‘depressed patients’) and still have it be relevant to individuals themselves (a ‘depressed person’ – who might also be old/young, male/female, like the colour red/blue, tend to get up early/late etc. etc.). Bricolage suggests a process of personalisation ‘in the wild’ – and hints that we should think about making technologies more customisable, so people can adapt them to suit their own ways of using them. This is also arguably a more collaborative venture than typical trials – this isn’t a case of a researcher giving a treatment to a patient, but of the patient co-producing the treatment, as it applies to them, during the trial.

This doesn’t help much with our original point though, namely that trials are so sl-o-o-o-w. Some people have argued that trials need to be more rapid, for example releasing results earlier (for post publication review), but other commentators point out this risks misleading results. We don’t want to step so far away from the traditional RCT design that we end up questioning whether our results are valid.

An alternative might be to try to boil down the components of technological interventions, so we can study those core features rather than studying the surface level appearance or content. If you imagine a series of trials about different meals, it might seem useless if, every time a trial of one type of meal is completed, the world’s chefs have moved on to cooking another (probably, if Masterchef has taught me anything, involving some kind of jus. And a water bath.) But if we’re interested in the health impact of these meals, and we can reliably measure the salt and sugar content of the recipes, then we can still learn a lot, and apply this learning in future. This way, we can develop an evidence base about the ingredients of successful technologies, even if the technology itself is changing. As an example, I gave a talk recently which argued that the degree of ‘connection’ and ‘collaboration’ offered by technologies was crucial to how they were experienced.

Other suggestions have been to employ the N=1 trial design, and I saw some tweets referring to this from #med2. Like many people, my knowledge of N=1 is restricted to knowing that most researchers just give a haughty sniff and say “isn’t that a posh way of saying anecdote?” so I’m very intrigued to hear more.
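For what it’s worth, an N=1 (or ‘N-of-1’) trial is a bit more than an anecdote: a single patient crosses over between treatment and control repeatedly, in randomised order, and you compare their outcomes within themselves. Here’s a toy sketch, with entirely made-up symptom scores for one hypothetical patient:

```python
import random
import statistics

def n_of_1_schedule(n_pairs, seed=1):
    """Randomised crossover schedule for a single patient: within each
    pair of periods, the order of treatment (A) vs control (B) is random."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_pairs):
        pair = ["A", "B"]
        rng.shuffle(pair)
        schedule.extend(pair)
    return schedule

def within_person_effect(schedule, outcomes):
    """Mean outcome on treatment minus mean outcome off it: an effect
    estimate for this one person, not a population average."""
    on = [o for s, o in zip(schedule, outcomes) if s == "A"]
    off = [o for s, o in zip(schedule, outcomes) if s == "B"]
    return statistics.mean(on) - statistics.mean(off)

# Six periods for one patient; the scores are invented for illustration
schedule = n_of_1_schedule(3)
outcomes = [7 if s == "A" else 4 for s in schedule]
effect = within_person_effect(schedule, outcomes)
```

The randomised ordering is what separates this from an anecdote, which is presumably the retort to the haughty sniff.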

I think a lot of these issues are relevant to health care beyond technology – with the increasing focus on self care of chronic conditions for example, more and more treatments will be deployed ‘in the wild’ and we will need to study them in context to really understand how they work. This puts health technology researchers and clinicians in an exciting place though – technology is making us get stuck into these problems now, and helps us start thinking about creative ways to solve them.
