The debate about the Lancet study on CBT for psychosis is still going strong in the Twittersphere. A lot of the criticism has focused on the statistics reported in the piece, with debates about standardised versus unstandardised effect sizes, the lack of sensitivity analyses for confounds, the choice of follow-up point to report, and so on. Prof James Coyne has even offered a $500 wager to the study authors to justify the effect size they reported. What’s interesting is that no one (as far as I’m aware) is saying the numbers reported are wrong – rather, the debate is about which numbers are chosen at the expense of which others, which analyses were emphasised or excluded, and whether the chosen way of reporting certain findings is the most ‘fair’ way to do so.
People outside of science and academia might look at this and think it confirms the old saying of “Lies, Damned Lies and Statistics.” I can imagine them thinking “See? This is why you can’t trust a bloody thing they say, because the numbers say whatever they want and they all disagree anyway!” They might also be thinking “Blimey, these ‘professionals’ sure like taking pot shots at one another.” Now, the latter might be true, and I don’t always like the tone with which academics criticise one another (see this great post by Inger Mewburn, aka The Thesis Whisperer, on whether academia implicitly encourages people to be – and, yes, this is what it says – ‘flaming assholes.’)
But the first part definitely gets it wrong. The fact that we tear these things down, that we look so hard for problems, that we are first and foremost critics of our own work – I actually think that if we didn’t do this, we should be considered untrustworthy.
The way I see it, research is like building the foundations of a house. But you’re building it out of pebbles. Not even bricks. Pebbles. This might sound daft, but seriously, this is what it’s like – because even the huge trials we run, or the big systematic reviews, only ever give us a little of the information we need. So you’re putting together all of these little pebbles of work, trying to see how they fit, and most of all trying to check that they are stable. Because we sure as hell don’t want that house coming down on the people inside – people who might already be ill or injured and relying on us to take care of them.
Research is all about finding the gaps. I know it must be infuriating that almost every piece of research ever ends with the phrase “more research needed.” I’m sure there are people who think “Well, Johnny Research-A-Lot, you’re hardly going to admit it’s all been sorted and your profession is no longer needed, are you?” But I genuinely think that the majority of the time ‘more research needed’ is true, and in some ways it’s exactly what the research was for. You’re picking up a pebble, examining it, adding it to the wall. Then you stand back and go “Hmm. There are still some holes there.” The conclusions we draw in health research have consequences, and that means we have to be hyper-vigilant about whether those conclusions can take the weight of the decisions that rely on them. In the case of the Lancet study, the findings will add to a body of evidence that could be used by psychiatrists and patients to decide which treatment to pursue, used by trusts to decide which therapies to fund (at the expense of which others), or used by bodies like NICE to make national recommendations about which treatments should be available to patients. So we have to be as sure as we can be, and we have to know which bits of the foundation are weak or need more support. We have to find the gap.
This, I think, is why researchers often seem to spend an inordinate amount of time critiquing research – and each other. We know how important those foundations are, and if we think someone is missing a gap or overstating the strength of the foundations, then we want to shout about it. On the whole I think this is a good thing, though if it becomes bullying or aggression, and ends up dissuading others from pointing out gaps, then it becomes a very bad thing indeed. As much as I believe in the need to criticise and to pull things apart, I think it’s important to do so in a way that is still, in the end, encouraging. I like the phrase “truth comes from argument amongst friends”, and ideally I think that’s what we should aim for in debates like this.
Update 17/2/14: Blogger @Huwtube raises interesting points here about whether the defence of CBT in cases like this stems from a felt pressure to defend psychological therapies more generally in the face of more mechanistic approaches to treatment. I wonder if this speaks to the idea, mentioned in my previous post, that psychological or talking therapies often seem to be painted as the automatic “good guy” in comparison to medication.
Update 19/2/14: I was alerted on Twitter to this article by Trish Greenhalgh, in which she lambasts researchers for trotting out the “more research is needed” phrase, arguing that it can mean we’re failing to learn obvious lessons or showing an unwillingness to give up on poor ideas. I think she makes some excellent points, but without wishing to incur professorial wrath (the *worst* kind), I agree more with some of those commenting on the piece, who suggest, for example, that it might be better to rephrase this as a need to conclude that “better research is needed”, and who discuss the role of trials in helping us identify what that might be (Prof David Colquhoun raises some interesting issues). The article does raise an extremely important point, though, about how we decide – and perhaps who decides – when to stop poring over our pebbles, accept the foundations are unfit for purpose, and bring out the bulldozers instead. Huge thanks to @1boring_ym for bringing it to my attention. Certainly in the piece above I’ve equated this kind of scrutiny with progress, and Greenhalgh’s article highlights the risk that a default assumption of “more research is needed” may in fact stifle progress, by encouraging us to follow unhelpful avenues of work or preventing us from drawing solid conclusions.