Polygraph machines are perhaps the most well-known technology to have been labelled a ‘lie detector’, able to read whether people are hiding the truth. They’re terribly popular, at least among people who don’t concern themselves too much with whether the machines actually tell the truth (looking at you, ‘scandalous’ daytime TV shows). The problem with polygraphs as lie-detecting machines is well established, though, and beautifully illustrated here by Vaughan Bell, who recounts the case of Buzz Fay – wrongly imprisoned for two years on the basis of a polygraph, he studied the machine and then managed to teach other offenders how to beat it.
Polygraphs work by measuring your level of arousal – via blood pressure or pulse – when you’re asked certain questions, and comparing it against a baseline established when you answer questions whose answers are known to be true. This is why polygraph scenes on’t telly involve the first few questions being obvious things like “Are you sat in a chair?”, “Is your name Detective MacBreakstherulesbutdammithegetsresults?” and so on. This establishes a baseline of your resting level of arousal, and the polygraph administrator then looks for spikes of arousal when you’re asked questions like “Did you sleep with the Chief’s daughter and never call her back?” or “Are you Keyser Soze?”. But if you force yourself to get stressed during the baseline – say, by imagining watching Jeremy Kyle – then your baseline level will be inflated, and the machine won’t see any ‘spikes’ when you react to other questions. The idea that polygraph machines are lie detectors is therefore misleading – they’re just machines that take measurements, and they’re open to manipulation by anyone who understands how they work.
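If you like, the whole baseline trick can be boiled down to a few lines of entirely made-up Python – the numbers and the threshold are invented for illustration, not anything a real polygraph uses:

```python
# Toy sketch of the polygraph logic: establish a resting level from the
# baseline questions, then flag any response well above it as a "spike".

def flag_spikes(baseline, responses, threshold=1.5):
    """Flag responses that exceed the baseline mean by a multiplier."""
    resting = sum(baseline) / len(baseline)
    return [r > resting * threshold for r in responses]

# Honest subject: calm baseline, a clear spike on the key question.
print(flag_spikes(baseline=[10, 11, 9], responses=[12, 25, 11]))
# [False, True, False]

# Countermeasure: stress yourself during the control questions, the
# baseline inflates, and the very same spike no longer stands out.
print(flag_spikes(baseline=[22, 24, 23], responses=[12, 25, 11]))
# [False, False, False]
```

The point the sketch makes is that the machine only ever sees the comparison, not the lie – shift the baseline and the comparison tells a different story.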
Implicit attitude tasks are another tool that has occasionally been heralded as a lie detector. These tasks involve showing you pictures or words from a category (say “women” or “men”) on a computer and asking you to hit a particular key when you see one of them; you’re also asked to hit a particular key when you see a “positive” or “negative” term. The researchers then look at your reaction times and check whether, for example, you were quicker when “female” and “positive” shared the same key, which is taken as showing an implicit positive bias toward women. The idea is that subtle but measurable differences in your reaction times will show what you really think.
There was much talk of the potential of such tasks to work as lie detectors, with studies showing, for example, that people who self-reported no racist bias would then be shown to “implicitly” prefer white people. Over here at Project Implicit you can check your own implicit biases toward race, gender and much more, which sounds like ounces of fun. I even recall these tasks being suggested as a way of identifying paedophiles, by measuring reactions to pictures of children to see whether someone “implicitly” preferred suggestive images of minors. The problem is that, just like the polygraph, the test is pretty easy to fool. If you just react very slowly to every single picture, it becomes much harder to detect the subtle speed differences that emerge when you’re trying to react as fast as you can.
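The dodge works for the same reason as the polygraph trick: the test only measures a difference, and you control both sides of it. Here’s a toy sketch with invented reaction times (real implicit-task scoring is more sophisticated – it normalises by variability – but the principle is the same):

```python
# Toy sketch of implicit-task scoring: the "bias" signal is just the
# difference in mean reaction time between the two key pairings.

def mean(xs):
    return sum(xs) / len(xs)

def bias_score(congruent_rts, incongruent_rts):
    """Positive score = slower on the 'incongruent' pairing (ms)."""
    return mean(incongruent_rts) - mean(congruent_rts)

# Reacting as fast as you can: the subtle difference shows up clearly.
fast = bias_score([420, 450, 430], [520, 540, 510])   # roughly 90 ms

# The dodge: respond slowly and uniformly to everything, and the
# difference shrinks into the noise.
slow = bias_score([1500, 1510, 1490], [1505, 1495, 1510])  # a few ms

print(fast, slow)
```

Deliberately slow, uniform responding flattens the very contrast the task depends on – which is why these scores are hard to trust from anyone motivated to hide something.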
This brings us to the inspiration for this post, an article in The Conversation about how a combined setup of penis-arousal-measury-thingies (ok, apparently it’s ‘penile plethysmography’ or PPG) and virtual reality technologies can help identify sexual offenders. It works by tracking both how long they look at virtual reality images of minors and their physiological arousal to those images. The article says:
[The researchers] used headsets that track eye movements and record how long participants spent gazing at images. They also measured participants’ sexual arousal through penile plethysmography (PPG), which measures the flow of blood to the penis.
Combining PPG and virtual reality to gauge the behaviour of sexual offenders in the past has been criticised because of the possibility that they game the system by simply not looking at the images. The eye-tracking capability of the headset overcomes this problem, recording not just which computer-generated images of adults or children the participants view, but over which areas of the body their gaze lingers.
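In other words, the headset’s contribution is essentially a dwell-time measure: how long the gaze sits on each region of each image. A minimal sketch of that aggregation – the sample format here is hypothetical, not the real headset’s output – might look like this:

```python
# Minimal sketch of the eye-tracking measure: each (hypothetical) gaze
# sample names an on-screen region and a fixation duration; summing per
# region gives the "where did the gaze linger" data the article describes.

from collections import defaultdict

def dwell_times(samples):
    """Total fixation time (ms) per region of interest."""
    totals = defaultdict(int)
    for region, duration_ms in samples:
        totals[region] += duration_ms
    return dict(totals)

samples = [("face", 300), ("torso", 120),
           ("face", 250), ("background", 400)]
print(dwell_times(samples))
# {'face': 550, 'torso': 120, 'background': 400}
```

Notice that the measure is still just a comparison across regions and stimuli – which is exactly what the countermeasures below attack.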
The article itself focuses on various interesting issues around the virtual reality aspect, like immersion, but for me this classic argument of getting past deception to the truth is the most interesting part – and as with the cases above, I’m not convinced it can. I’d need to see evidence that people who are instructed to avoid looking at images they find arousing, and who really try, can’t play the system in some way. I can imagine this being done through a combination of the lie-detector technique (just think of arousing things the whole time and there won’t be any clear differences between stimuli) and the implicit-task dodge (look at everybody for the same amount of time and again screw up the comparison data). Maybe the headset/PPG combination makes this harder to do, but I’d question whether it makes it impossible. It’s also, of course, one of those cases where you can’t see how it would work in practice. It would presumably not be ethical or legal to just try it on anyone – especially given the invasive aspects, you’d need good reason to assume the person has these attractions, but if you have good reason anyway then what does the headset add? Does it tell us something above and beyond, or more reliably than, the other forms of evidence we might use?
It strikes me that “Hey, we can see what people REALLY think!” is a pretty standard go-to claim for any device or procedure that wants to show it has special power or insight. I guess this is because the idea of reading minds is so appealing (to some – I personally think it would be hellish…) and it seems an easy route to demonstrating value: This Machine Reads Minds. We can spot liars, expose racists and catch sexual offenders. In reality, I think these techniques rarely achieve these feats, and people continue to be more complicated and harder to read than any tool can tell us.
Perhaps more interesting to me in all this is what it says about our mixed-up relationship with body and mind. I think lots of people still see body and mind as completely separate. Certainly this would go some way to explaining why “physical” and “mental” health get treated so differently, or why some people think you can separate illness into “real” or “psychological”. I wonder if the above ideas fit into this in some way: the appeal of reading those pure signals from the body – heart rate, reaction times, eye movements – the ‘tells’ that will reveal what the mind is really up to. We believe that what the body physically tells us must be ‘real’ or true, unlike that chatter from the mouth, full of the mind’s distortions and lies. Consequently, the idea of machines that can pick up on these bodily confessions, and save us from the perils of just talking to people, has huge, but perhaps undeserved, appeal.