Veja Du and “routine” data collection in mental health services

When giving out questionnaires in trials, there's a comment that sadly often gets made: the participant, having completed the lengthy and deeply personal symptom checklists, will say, "well, if you weren't depressed before you did them, you certainly would be afterwards."

This very issue prompted a really fascinating Twitter discussion a week or so ago, as mental health service users questioned the value of receiving questionnaires like this while in treatment: whether they find them useful for monitoring their own progress through therapy, or whether in fact they can make people feel worse.


People on the other side of services – including me – suggested that this probably wasn't the main aim of those questionnaires anyway. They're there to help monitor the performance of therapists and services, and to collect data nationally on what impact services like IAPT have on communities and populations, rather than on individuals per se. This proved extremely controversial for some service users, who pointed out that they had never had this explained to them, let alone been asked for consent to take part in this kind of national monitoring scheme.

Questionnaires like this, used in IAPT or clinical psychology services, apparently count as 'routinely collected data' in the health service, so approval isn't required to access them. It was less clear what approval is needed to collect that data in the first place. There does seem to be a crucial difference here, in that other forms of routinely collected data don't rely on the individual themselves providing the information. As @Sectioned_ commented, answering difficult and sensitive questions feels far from 'routine'.

The issue of 'performance monitoring' also, perhaps inevitably, led on to the problems of 'target-driven' services in the NHS. Some with experience of working in IAPT services talked about services potentially gaming the system by deliberately turning away people they thought had a poor chance of recovery, since treating them would damage the service's recovery scores (for a discussion of this, see this post by Andy Fugard). This is a particularly excruciating example of how mental health is the poor cousin of physical health services. Imagine a physical health service that deliberately turned away the people who needed it most – it would be front-page news and a national scandal (1).

Some of the debate centred on the question of whether population-level benefit (identifying failing services, improving national service provision) is ever justified when it comes at individual cost (to the person filling in the forms). Some service users were happy to contribute for a broader purpose, but it was emphasised that this needs to be their choice. That rests on the choice actually being presented, and the rationale for the data collection explained. Unfortunately, from the experiences being discussed, it seems this doesn't always happen.

What I personally found enlightening, though, was how this discussion with individual service users exposes essential questions about the population benefit or strategic value of such questionnaires. The quality of the data collected – with examples of people answering how they thought they 'should' in order to 'qualify' for treatment, or giving the same answers every week because they didn't want to think about it too much, and even faking progress because they felt bad for their therapist (!) – suggests that problems for the individual on the ground have a direct impact on the bigger picture. The arguments in favour of collecting and using this data rest on the assumption that the data is accurate (so if someone makes progress, this is genuinely reflected, and is comparable across different service users) and valid (that it measures 'getting better' or 'getting worse' in a real way). The problems raised by those completing these measures suggest the data isn't accurate – people fake their responses or don't engage with the questions – and also raise questions about its validity, as the symptom checklists weren't considered relevant or meaningful (2).

I do recognise that for some service users this was really pretty irrelevant – they quite understandably didn't give a monkey's about data quality when they felt their rights and needs as service users were being neglected in favour of what was experienced as a concealed national monitoring programme. I guess my message here is more for researchers and professionals who use this type of data themselves: it's a great example of 'veja du', or looking at something familiar with fresh eyes. In this case, looking through the eyes of service users raises a huge number of important questions about how we collect this data, the ethical issues it raises, the assumptions we make about what the data tells us, and the ways the questionnaires could be misused. Routine data is routine to us – but looking through service user eyes jolts us out of our typical assumptions, and hopefully away from "that's just how it is" towards asking "how should it be?" and, most importantly, "who is in the best place, or the most important place, to tell us this?"

Huge thanks to those who contributed to the Twitter discussion: @Sectioned_ , @BipolarBlogger, @suzypuss, @maddoggiejo, @inductivestep, @ScottMHC14, @JaneStreetPPAD and more. As I said that day, I learn more and am challenged more on Twitter than at any academic conference I've been to. @PublicInvolve nailed the real benefit though 😉

 Notes

1. Another example was how services impose an almost arbitrary limit on the number of therapy sessions service users get, so that the service can see more people and keep waiting times down.

2. There's a fascinating question here of how we could make such measures meaningful. Service users asked why they couldn't choose their own goals or questions and track their progress on those. This would certainly fit with the policy drive to personalise care and make it meaningful to the people receiving therapy. You could still evaluate services overall, but the scale would be how much progress has been made in meeting each user's individualised goal plan.

3. On the note of how important it is to consider these issues through service users' eyes, I would recommend reading this by @Sectioned_, which makes it painfully clear that those of us bemoaning the 'service level' problems that cuts to mental health care are causing are probably experiencing only a paper cut's worth of the pain they cause to the service users who depend on them (and quite likely to those struggling on the front line to provide those services too).


3 Responses to Veja Du and “routine” data collection in mental health services

  1. Pingback: A matter of routine? | Sectioned

  2. Andy says:

    One of the main uses of the questionnaires absolutely is to work out whether therapy is working for individual service users while therapy is still in progress; see, e.g.:

    Bickman, L., Kelley, S. D., Breda, C., De Andrade, A. R., & Riemer, M. (2011). Effects of routine feedback to clinicians on mental health outcomes of youths: results of a randomized trial. Psychiatric Services, 62, 1423–1429.

    Knaup, C., Koesters, M., Schoefer, D., Becker, T., & Puschner, B. (2009). Effect of feedback of treatment outcome in specialist mental healthcare: meta-analysis. The British Journal of Psychiatry, 195(1), 15–22.

    Lambert, M. J., & Shimokawa, K. (2011). Collecting client feedback. Psychotherapy, 48(1), 72–79.

    Cooper, M., & Wilson, J. (2013). Systematic feedback: a relational perspective. Therapy Today, (December), 30–32.

  3. Sarah, there is a massive problem with questionnaires: too many can be a negative factor. We found in the LABILE trial, when planning the baseline assessments, that when I was used (I offered) as a guinea pig to time all the baseline questionnaires, it took over 3 hours. I was thinking, well, imagine someone with very unstable BPD having to endure all of these! So we cut it right back and, after much discussion with our whole team, included only the most essential; we had to get a certain amount of data out for the HTA. The other thing is that limiting therapy sessions is management driven, not clinician driven, and not offering what a patient needs is wrong, especially when they fill out CORE etc. and then the reward is you get a ration and no more, or, if you are very lucky, you can progress to Step 3/4 – but the waiting list is an unknown entity. Data collection is important in research, especially in RCTs recruited to full power. It can tell us a lot, especially with qualitative data. However, yes, more goal setting by service users is a good place to start. After all, if you have CBT in complex care you get more sessions and can do your own goal setting.
