“True science teaches, above all, to doubt and to be ignorant.”
– Miguel de Unamuno
Science writer Carl Zimmer posted a series of tweets the other day from David Dobbs's talk on science communication. They were all great points, but one – about exploring the darkness – stuck with me in particular:
The problem being highlighted was the tendency for popular science/psychology books to overstate the certainty of findings, and indeed the certainty of science itself. Such books miss the point that science is far more often about uncertainty, and stating what we don’t know – being willing to admit that we’re exploring in the dark.
This certainly strikes a chord for me in terms of evidence-based medicine. A lot of the time, what we're really talking about in EBM is a lack of evidence, or unclear evidence. Rather than being the girls who get to rubber-stamp treatments as 'best yet', high fives all round, far more often we're the debunkers or the Cassandras. We're the annoying ones in the corner, shaking our heads and saying "I think you'll find it's more complicated than that…"
This happened to me quite recently, when the Horizon documentary about mHealth and the quantified self movement was on. Lots of people on Twitter were excitedly linking to different apps and start-ups, and talking about the revolution in health care that was starting. And I was in the corner, saying "hmm…" and generally pouring cold Twitter water over everything¹. The irony is, I bloody love mHealth and quantified self stuff. I work on health technologies, write about them here, and currently have not one, not two, but four apps on my phone for monitoring various things (sleep, exercise, mood and 'habit formation', if you must know). But that's just the thing. It's not about what I love, or what I like, or even what I personally find helpful. It's about the evidence. And often, the evidence just isn't there.
Often in research you don’t get to be the enthusiast, or the optimist. You get to be the skeptic, or the debunker. Hey, you know that awesome new treatment, that involves something lovely and wonderful and which loooooaaads of people think could Change. The. World? Well I’m here to systematically review the evidence base and tell you there’s no way we can support rolling it out in services yet. I’m here to highlight that the theoretical basis of your intervention is incomplete at best, obviously flawed at worst. I’m here to run the clinical trial that tells you it just doesn’t work.
Sadly, this is what research is all about. Hi. I’m Dr Killjoy.
Research, and evidence-based medicine, is about what works. In practice, this often means what doesn't work, how it doesn't work, and by what massive margin of failure it didn't work. Lots of people don't seem to get this, and think that just saying there's "evidence" is sufficient, without paying attention to what the evidence actually says (cough, politicians, cough). For example, I've had some rather awkward conversations with service providers who believe that research itself is 'evidence' that something works, rather than the results of that research being the evidence – a view that neglects the rather disturbing possibility that the evidence will show something doesn't work, or worse, that it does more harm than good, and that you won't find what you wanted at all.
I must acknowledge that in these cases we should remember that health researchers have the luxury of being wrong. If we find that a treatment, or parts of it, doesn't work, we can still put that in our funding report, publish a paper about it, and even use it to develop an entirely new grant to look at something else (or at the same thing, but differently – the phrase "needs more research" is ubiquitous). I think this is a privilege – the ability to do the work, to expand the evidence base, without being afraid of judgement if a treatment fails. It is a luxury that our colleagues in services often won't have, and I think it's our job as academics to highlight why it should be their privilege too: why learning from mistakes is still learning, and why finding out that something doesn't work, and for what reasons, should be as valuable for progress as discovering some new whizzy treatment that cures depression while doing all your ironing and feeding the cat.
For me though, the people I am trying to benefit most are not the providers, but the patients. Anyone who works in treatment evaluation will know what it's like to talk to a patient who has just worked really hard to try a new treatment, only to report that it didn't work, that they didn't like it, that it didn't achieve the things they'd hoped. I think it's important to remember that when we trial a new treatment, we are essentially taking sick, tired people, and making them Do Stuff. We make them exercise, or make them read a self-help manual, or take a new pill. And we tell them it might make them better. We go to the sick and we make them invest their own time, energy and hope into trying something new.
And often, it doesn't work².
And when it doesn't, we owe it to those people who may feel let down or disappointed or fed up to say that it didn't work, to say that we'll learn from that mistake, and to say we won't take any other tired, poorly people and make them do the same things. Of all the things that patients have to deal with, I think false hope is one of the most heart-breaking. So if I have to debunk otherwise lovely-sounding treatments, if I have to always be the pessimist and the skeptic, then that's ok. I'm happy to be Dr Killjoy, if it means I'm telling people the truth, and making sure that we don't waste the time and energy of people who might not have much of either.
I want to point out that I think this is different to being a cynic. Being a cynic is dull. It always annoys me when people who proclaim they love science or skepticism seem mainly to enjoy telling everyone else they're stupid, and pointing out that anyone who ever said anything was wrong. As I said at the beginning, research and science are about exploring the darkness, not embracing it. Being willing to examine the evidence, to pull assumptions apart, to critique and dig and say when something has failed or isn't working – and to still believe that this makes things better, that we can learn and improve – is one of the lovely things about research. The audacity of doubt, perhaps.
1. There are various challenges facing mHealth – is it only useful for a subset of tech-savvy, motivated people? Is there potential for negative effects caused by self-monitoring, either on behaviour or through provoking anxiety? Is self-monitoring overly burdensome? And this is before we get to issues about privacy and data protection, and who actually owns the information you log in an app. For an excellent summary of the issues around apps and medicine, see Margaret McCartney’s BMJ piece.
2. This paper reports that new treatments tested in trials are better than existing ones just over half the time, and often not substantially better. I saw some commentary on this that seemed to suggest it called trials into question – why are we doing them if they so rarely show clearly positive results? As hopefully you'll gather from what I've said above, I think this entirely misses the point – it is exactly the purpose of trials to test and demonstrate whether something new is better or not.