I spend a lot of time on twitter. I’m sure I have colleagues who think I spend altogether too much time on twitter. I don’t feel guilty about it though, because I learn a lot, from both following developments and conversations (“lurking” in twitter terminology) and from interactions with other tweeps, both academic and otherwise (see for example 4 Things I Learned On Twitter). There is another benefit as well, because as a mental health researcher interested in how technology can support people with mental health problems, twitter is a living, thriving, cat-picture-posting example of that very thing in action.
People with mental health problems use twitter in many different ways: to express their views, to share experiences, to interact with professional organisations such as the Royal College of Psychiatrists or charities such as Mind, to have tweet chats about mental health, its impact, what it is and what it means, to share jokes about the rubbish portrayal of mental health in the media, to keep up with and respond to the latest news items about mental health, to promote mental health initiatives and tackle stigma, and finally to make friends, find support and talk openly about their own illnesses.
It’s the last part that I assume was the motivation for the new Samaritans Radar app. The app claims to help turn twitter into a ‘safety net’ for people who might be in crisis, by checking their tweets for certain key phrases and then alerting that person’s followers that something might be wrong. The press release states:
Samaritans, the leading suicide prevention charity, today is launching Samaritans Radar – a free web application that monitors your friends’ Tweets, alerting you if it spots anyone who may be struggling to cope. The app gives users a second chance to see potentially worrying Tweets, which might have otherwise been missed.
Created by digital agency Jam using Twitter’s API, Samaritans Radar uses a specially designed algorithm that looks for specific keywords and phrases within a Tweet. It then sends an email alert to the user with a link to the Tweet it has detected, and offers guidance on the best way of reaching out and providing support.
Samaritans recognises that social media is increasingly being used as an outlet for people to share their feelings. In addition, there are some who may go online in the hope that someone will reach out and offer support.
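To make concrete what that press release is describing, here is a minimal sketch of that kind of keyword matching. The actual Samaritans Radar algorithm and phrase list were never published, so every phrase, function name and alert format below is invented purely for illustration:

```python
# Hypothetical sketch of keyword-based tweet scanning, as described in
# the press release. The real phrase list and matching logic are not
# public; these phrases are invented examples.
WORRYING_PHRASES = [
    "feel like giving up",
    "can't go on",
    "no one cares",
]

def tweet_matches(tweet_text):
    """Return the first 'worrying' phrase found in the tweet, or None."""
    lowered = tweet_text.lower()
    for phrase in WORRYING_PHRASES:
        if phrase in lowered:
            return phrase
    return None

def build_alert(subscriber_email, tweet_url, phrase):
    """Compose the kind of email alert the app would send its subscriber."""
    return (
        f"To: {subscriber_email}\n"
        f"A tweet from someone you follow may be worrying "
        f"(matched: '{phrase}').\n"
        f"Link: {tweet_url}\n"
    )
```

Note that even in this toy form, the alert goes to the subscriber who installed the app, not to the person who wrote the tweet; that asymmetry is exactly what the rest of this post is about.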
Looking at that summary, a couple of potential problems leap out. Firstly, people on twitter can have many followers who aren’t ‘friends’ (and some may actively be enemies, as anyone struggling with trolls will know). Secondly, there’s the mention of mining tweets automatically to look for content, which can be disconcerting for a lot of people and raises privacy concerns. Finally, there’s the slightly dubious comment about people going online “in the hope that someone will reach out”, which seems to imply that people use twitter as a kind of ‘cry for help’. That contrasts with the active use of twitter that I’ve seen (and even if this were something people did, if their followers aren’t offering support anyway then I’m not sure the app would make much difference).
The biggest problem, however, only became apparent when I clicked the link to see what signing up to Radar involved. I’d assumed it would be a case of me choosing to add the Radar app to my account, so that anyone who follows me would be alerted to these ‘key phrases’ in my tweets. In fact, the page says the app will “flag potentially worrying tweets that you may have missed, giving you the option to reach out to those who may need your support.” I.e. if I add the app to my account, it monitors the tweets of the people I follow, rather than allowing my own followers to monitor me.
It’s exactly this point which has caused the most anger and dismay among twitter users themselves. If you check the #samaritansradar feed, you will find lots of comments about invasion of privacy, data protection, fears about abuse of the app (for example, if one of the trolls mentioned earlier is alerted that their victim is feeling particularly down and decides to take advantage) and finally concerns that the app, by monitoring people without their consent, will actually backfire and discourage them from tweeting about their problems.
As someone who is, on the whole, optimistic about the use of technology to support mental health, I find this a very worrying development. Specifically, I worry that the Samaritans have walked into a classic trap in health technology: designing something to benefit service users without really considering how those users would feel about it. The website says that the app did go through pilot testing with user groups, but if so then it’s strange that this very obvious problem of confidentiality wasn’t picked up. The press release mentions that the app is particularly targeted at the 18-35 group, and one of the groups consulted was Young Minds, so perhaps younger twitter users simply didn’t share these concerns? I’d love to see the evaluation feedback itself.
I can imagine that some people think the furore over ‘confidentiality’ is misguided given that twitter is a public platform. Anyone can see your tweets, and (if your account is unlocked) anyone can send you a reply. I think this misses something crucial though, about how people actually use twitter themselves. When I’ve talked to people who enjoy using technologies to support their mental health, they really seem to value the control they have over it, and the feeling of empowerment that they are choosing what to do and when. If someone chooses to say on twitter “I feel like giving up”, then this is a choice to express themselves. They can also choose how to respond to other people who reply. The idea that an automatic algorithm will assess that tweet, and then send a specific message to their followers about it encouraging a certain response, is in opposition to this sense of ownership and control. The fact that this happens without the person ever having opted in to this service – it is their followers who add the app to their account – compounds this sense of invasion and of choice being taken away. Choice and control seem to be two of the key benefits identified by users of mental health technology, so an app which removes those benefits, however well intentioned, is going to face resistance, and also kind of misses the point. Mark Brown makes a similar point in his typically excellent post on the topic:
“Suggesting that if users want greater privacy from the app they should mark their tweets as “private” misses the point of how most people use Twitter. Making tweets private will prevent them from being indexed, removes for many the ability to reach out to others, interact via hashtags conversations, and interact via the medium of Twitter. These are some of the primary attractions of the platform over closed reciprocal networks like Facebook”
People like twitter because it is public and encourages communication and new contact. This doesn’t mean however that we should make assumptions about what they are trying to communicate (is it a cry for help?) and intervene on their behalf to decide who should be specifically contacted about it, and what action they should take. We especially shouldn’t decide that it’s those other followers, not the person themselves, who choose whether this service is activated or not.
Even if you put aside the potential abuse of this function by less-than-nice followers, I can also imagine how even well-meaning tweeps could still cause problems. I’ve seen many times on twitter the irritation caused by followers tweeting uninvited “advice” (“Have you been to the gym?? Exercise ALWAYS helps!” or “you would feel better if you switch to an all organic vegan diet”). It can also create a sense that the person should be grateful for platitudes sent from people they don’t know.
I really believe that social media can contribute in wonderful ways to supporting people with mental health problems. I also believe, though, that we will get the best sense of how it does this by following those users themselves and understanding what they want and need from the service (or finding ‘technology desire paths’, as I’ve described them). The Samaritans have now released a statement saying they have extended the ‘whitelist’ function for hiding tweets from organisations to individuals, so they do seem to be responding to the twitter feedback (I’m not 100% sure, though, whether this means you have to activate the app in order to add yourself to a whitelist?).
That statement also says “At the heart of Samaritans philosophy is the belief that ordinary people listening to the problems and feelings of one another can make a big difference to people struggling to cope. People often tell the world how they feel on social media and we believe the true benefit of talking through your problems is only achieved when someone who cares is listening.” I wonder if this hints at the core of the problem here. The Samaritans have obviously seen that people on twitter talk about and express their feelings. However, they then add in their own view that ‘knowing someone is listening’ is a crucial part of this, and also they’ve created their own definition of what that listening is (having your followers alerted to specific phrases in your tweets.) This is perhaps one of those cases where they’ve designed a service by anticipating what users want or need, rather than asking them directly – determining the path, rather than following the people who walk it.
(NB. The views expressed in this post, as with any on this blog, are mine alone and not representative of my employer or funder.)
Update 5/11/14: A couple of really great blogs on this issue you might be interested in – Victoria Betton here links it to the concept of “context collapse” on social media which is fascinating. This post by Michelle Brook looks at how the process of designing for users is vital and very complex, with suggestions for how Samaritans may have missed some important considerations. Lots to learn in both posts for others interested in eHealth and Digital Health as well as anyone concerned about the app itself.