The tale of Eliza is a bit of a standard one in cognitive psychology lectures. Eliza is a text-based programme that you can ‘talk’ to, one modelled as a kind of therapist. An early example of natural language processing, she emulates Rogerian therapy; her creator, Joseph Weizenbaum, chose that style essentially because Rogerian therapists are non-directive, focused on mirroring what their client says, and this was something an NLP bot could plausibly replicate. The programme didn’t need any additional knowledge or protocols – it could just respond to the text entered by the user. The story goes that, to Weizenbaum’s horror, some of his lab staff actually started to believe “she” was a real therapist, and indeed kicked off somewhat when they realised all their innermost thoughts and desires were being recorded in a log on Weizenbaum’s computer.
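To give a flavour of how little machinery this actually takes, here’s a minimal sketch of the keyword-and-reflection trick Eliza relies on – a few pattern rules plus pronoun swapping, so “I need X” becomes “Why do you need X?”. The rules and responses here are illustrative, not Weizenbaum’s original DOCTOR script.

```python
import re
import random

# Pronoun reflections: "I" in the user's text becomes "you" in the reply.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# A few keyword rules in the spirit of Eliza's script
# (made-up patterns for illustration, not the originals).
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "Tell me more about feeling {0}."),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

# Non-committal fallbacks when no keyword matches.
DEFAULTS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(text: str) -> str:
    """Return a Rogerian-style mirroring response to the user's text."""
    for pattern, template in RULES:
        match = re.match(pattern, text.lower().strip(".!?"))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return random.choice(DEFAULTS)

print(respond("I need a holiday"))  # → "Why do you need a holiday?"
```

Note there is no understanding anywhere in this – the programme never knows what a holiday is. It just hands your own words back to you, which is precisely why the Rogerian framing was such a convenient disguise.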
I’d love to know more about this – did the users really think, for example, that the programme was a person? (Weizenbaum seemed to think so, and dedicated much of his later scholarly work to admonishing those who anthropomorphised technologies.) Or did they just find some benefit in getting their thoughts down into text, and in having a programme that spurred them to work through them?
Eliza gets referred to quite often as the first “computer therapist”, and I think the attitudes of Weizenbaum Vs The Staff still sum up quite well the different reactions to whether computers should be involved in therapy at all. Is this a terrible technological encroachment on something fundamentally human – a soulless programming ‘trick’ that would be a bad joke gone too far if used for ‘real’ therapeutic purposes – or is it something valuable, something engaging, something that could genuinely help?
My feeling is that the answer is somewhere in the middle. I agree with Weizenbaum that trying to create a genuine ‘computer therapist’ is a daft idea. I actually did some training in Rogerian therapy while I was a PhD student, and it is an incredibly human and personal thing – it’s about having another living, breathing person there with you, to help you reflect on your own words. A patient in a study I did about computerised therapy coined the phrase “the verbal cuddle” – the emotional, physical embrace of actually speaking to another human being. You can’t replicate that.
That, I think, is the key point – replication doesn’t work. But perhaps what technology can do is offer something different. The patients in the study I mentioned who really, really liked computerised therapy weren’t those who thought the computer programme did a stand-up job of imitating a real therapist; they were those who liked what it did differently. The fact that they could access therapy at home, at any time of day. The fact that they didn’t need to talk to someone – that the therapy was private, and something they could work through at their own pace. Maybe that’s what Weizenbaum’s staff liked as well. In the stories I can find, they didn’t freak out about the computer not being a real person – they freaked out when creepy Mr W said he’d read their private text.
This brings us – finally – to R2-D2. This fab post on Medium tackles the age-old problem of why R2-D2 is awesome but C-3PO is kind of a pain. The author nails this down to the very problem discussed above – that the best technologies are those that do things we can’t, whilst those that try to replicate or replace things we already do aren’t as valuable and can be just annoying, especially if they try to emulate human characteristics or interactions (Microsoft Word’s hideous Clippy being cited as an example. NO I AM NOT WRITING A FREAKING LETTER, CLIPPY, GO AWAY). I think this is very true in mental health technology – attempts to make the programme talk ‘like a therapist’, or to give it a fake therapist persona, just seem to rub people up the wrong way. It emphasises what is artificial, rather than making it seem more personal.
This paragraph sums it up best:
“R2-D2 excels in areas where humans are deficient: deep computation, endurance in extreme conditions, and selfless consciousness. R2-D2 is a computer that compensates for human deficiencies — it shines where humans fail…
R2-D2 aspires to be a great computer.
C-3PO aspires to be a mediocre human.
We need great computers, not mediocre humans.”
Mental health technology, I think, is a perfect example of this. Some online CBT programmes, like MoodGYM, seem to try to emulate aspects of ‘real’ therapy – even down to insisting you do a one-hour session each week, as you would with a ‘proper’ therapist. This seems like nonsense if we embrace the idea that we should exploit what computers do differently. They don’t have a waiting list of patients or an office you need to attend. They can be ever-present, or present on the move, or only activated when you need them.
Making programmes that try to emulate human therapeutic interactions only serves to highlight what’s missing – that you aren’t sharing this experience with a real person. But using what technology can do differently to think about how we could deliver therapy differently – that could work. That could get us away from Eliza, and mean our ‘computer therapists’ start to look, and act, more like R2-D2 than C-3PO. They might be more valuable that way – and I’m sure they’d be a lot less annoying.
You can play with an online version of Eliza here. The slightly creepy thing is that because the bot just churns out answers based on your text, it responds instantly – the reply appears at the exact same moment your own words show up on screen. It reminds me of the ‘Midnight’ episode of Doctor Who, or those bald guys in Fringe…