Pretty much everything has been tried. Bumps on the head, a.k.a. phrenology, bumps in the head, a.k.a. magnetic resonance imaging, galvanic skin response, heart rate, breathing rate, stool color, sweating, nervousness, answers on proprietary questionnaires (they call them “instruments” and charge pretty pennies to view them), torture, e.g. water-boarding (yes, it’s still torture even if you undergo it for the purpose of writing about it). Even asking, “Are you lying?”
I’m guessing about the stool color.
The newest foray into fiction forecasting is eye blinking. So says Frank M. Marchak in his peer-reviewed “Detecting false intent using eye blink measures” in the aptly named journal Frontiers in Psychology.
Marchak assures us that “[s]ince being untruthful regarding both past and future acts includes the attribute of a desire to mislead”. That settled, Marchak collected, via on- and off-line ads, 54 Montanans as representatives of the entire human race.
In two extremely complicated experiments “of ecological validity”, he tracked his volunteers’ pupil diameter, blink, and eye movement using the “Smart Eye Pro version 5.4 remote eye tracker.” Results?
In both experiments…those with false intent showed a lower blink count difference, fewer numbers of blinks, and shorter maximum blink duration for questions related to their intent compared to questions related to another act for which they had no intent.
That’s a tangle, which only grows thicker when considering the experiment itself, which went like this (I’ll just do the first). People who saw the ad called a number hooked to an answering machine, which told volunteers to leave their own number. They were called back later, promised $25, and asked to venture to an “intake office” in some downtown building. Paperwork was filled out. And then—this is beginning to sound like a spy novel—participants were handed a slip of paper which directed them to walk to a second building in which was an “instruction room.”
Some of the volunteers were then instructed to don headphones which informed them that “they were to commit a mock crime by taking a ‘fuse lighter’ from a downstairs office in the building and providing it to a ‘contact’ after completing a credibility assessment test at another location”. They were given a photograph of the “contact.”
The remaining volunteers “heard instructions in which they were to remove a note from the door of a downstairs office and were not provided with the supplementary materials.”
In both conditions, participants exited the instruction room, walked around the block, and entered the building through a side door. They then proceeded downstairs to a basement office. Those in the truthful intent condition simply removed a sticker containing numbers from the door. Those in the false intent condition were required to enter the office and find and remove the fuse lighter. The office containing the fuse-lighter was furnished to resemble a working facility.
In both conditions, the participants exited the building through a third door and proceeded approximately 2 blocks to our laboratory to take a credibility assessment examination.
Then it got strange. Participants were debriefed with a fixed set of questions, the timing of which was painstakingly measured. For example, the “neutral” question “Do you live in Bozeman or a surrounding community?” took precisely 3005 milliseconds to hear. Not 3004 milliseconds, not 3006 milliseconds, but 3005 milliseconds because, one presumes, a one millisecond difference could make all the difference.
Besides the neutral queries, there were also out-of-the-blue questions about drugs (e.g. “Do you intend to transport illegal drugs today?” 2610 ms) and some about the bomb (e.g. “Do you plan to provide a fuse lighter to someone today?” 3060 ms).
During this grilling, the eye-scanner counted the blinks, their durations, the time between them, and the maximum blink time, before, during, and for 10 seconds after each question.
Then it got stranger. The statistical manipulation of these numbers was so complex it would have put Merlin to shame. But skip that and consider the “raw” data. Two groups: “false” (fuse bomb) and “truthful” (note only) intent. For example, the average (standard deviation) maximum blink duration for neutral questions was 221.700 ms (136.474 ms) in the bomb group and 231.398 ms (123.879 ms) in the note group. The “8” in “231.398” represents a millionth of a second. There were similar results for drug and “explosives intent” questions.
Same sort of thing for number of blinks. The note group averaged 6.368 blinks (SD 4.438) and the bomb group 4.467 blinks (SD 2.693) for the explosives-intent queries, with similarly fewer average blinks for the other query types. What’s a thousandth of a blink? Never mind.
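For flavor, here is a back-of-the-envelope effect-size check using nothing but the means and standard deviations quoted above. The Cohen’s d calculation (with a simple pooled SD, assuming roughly equal group sizes, since the exact per-group counts aren’t reproduced here) is my own illustration, not anything from Marchak’s paper:

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Standardized mean difference using a simple pooled SD.
    Assumes roughly equal group sizes -- an assumption, since the
    paper's exact per-group n's aren't reproduced here."""
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (mean1 - mean2) / pooled_sd

# Maximum blink duration, neutral questions (ms): bomb group vs. note group
d_duration = cohens_d(221.700, 136.474, 231.398, 123.879)

# Number of blinks, explosives-intent questions: note group vs. bomb group
d_blinks = cohens_d(6.368, 4.438, 4.467, 2.693)

print(f"d (max blink duration) = {d_duration:.3f}")
print(f"d (blink count)        = {d_blinks:.3f}")
```

The blink-duration difference comes out vanishingly small (|d| well under 0.1, meaning the two groups’ distributions overlap almost completely), and the blink-count difference is middling at best. With standard deviations this large relative to the gaps between groups, reporting means to the thousandth of a millisecond is precision theater.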
You can guess what came next: frequentist hypothesis testing (after much manipulation) and wee p-values, all of which “proved” Marchak’s theory to Marchak’s satisfaction.
I have no idea what to make of this study, but I did learn this (I’ll let Marchak have the last word):
The effect of arousal on eye blink behavior has been investigated by Tanaka (1999) who examined the changes in blink rate, amplitude, and duration as a function of arousal level and found differences between a high arousal vigilance task and a low arousal counting task.
I believe I learned of this study from Neuroskeptic @Neuro_Skeptic, but I can’t recall for sure.