Another day, another dreary study purporting to show that the brains of “conservatives” are different than those of “liberals.”
This one hooked 83 people1 up to an electrical phrenology device (fMRI) and had them look at disgusting pictures (still shots from The View?) and other sorts of pictures and then rate them “using a nine-point Likert scale”. I’ve asked this before, but on a scale of -2 to 52.7, how good are these faux numerical scales at quantifying things like disgust or pleasantness? Never mind.
The peer-reviewed paper is by Read Montague and a slew of others in Current Biology, and has the same name sans question mark as today’s post.
To discover “conservatives”, “liberals”, and “moderates”, questions were asked about how strongly participants supported items like “Biblical truth” (do no liberals believe this?) and “Foreign aid”. These were scored, the scores separated, and the results assumed infallible. Yes, really. There is no indication—which is to say, no indication—the uncertainty from these arbitrary questions arbitrarily scored and arbitrarily busted up was carried through in any analyses. But since everybody makes this mistake, we shouldn’t question it.
Anyway, the main result is no result. The three “groups did not significantly differ in subjective ratings of disgusting, threatening, or pleasant pictures”. Also turned out that “there were no significant group differences on [other] self-report measures”.
End of story? No, sir. Scientists do not let the absence of wee p-values discourage them. Out came the “penalized regression method called the elastic net” applied to the fMRI data. The theory was that even though there were no real differences in behavior, maybe the brains were different after all, which is a strange thing to think given there were no real differences in behavior. I hope my repeating that isn’t annoying.
Is this a good point to remind us the fMRI data are not pictures of the brain but are themselves output of models and heuristics (“Functional data were first spike-corrected to reduce the impact of artifacts using AFNI’s 3dDespike”, etc., etc.), which are themselves subject to uncertainty which should be carried forward in any analysis but which usually isn’t, and wasn’t here? If not, let me know when is.
I hesitate to describe what the authors did next, not because it’s difficult, but because I don’t think anybody will believe it. I will first remind us that we are to again lament that most statistical practice is designed around model fit, which tells the world how closely a model fits the data at hand, and that the more models tried, the better the chance of discovering one which fits.
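Lest anybody doubt that last claim, here is a toy simulation (mine, not the authors’): manufacture an outcome of pure noise, try ever more pure-noise predictors against it, and watch the best in-sample fit climb all by itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 83  # subjects, as in the study; everything else here is noise

y = rng.normal(size=n)  # a fake "outcome" with no real structure

def best_r2(n_predictors):
    """Best in-sample R^2 found among n_predictors pure-noise predictors."""
    best = 0.0
    for _ in range(n_predictors):
        x = rng.normal(size=n)
        r = np.corrcoef(x, y)[0, 1]
        best = max(best, r ** 2)
    return best

for k in (1, 10, 100, 1000):
    print(k, round(best_r2(k), 3))  # best fit rises with number tried
```

With a fixed seed the numbers are reproducible; the lesson, alas, is not fit but the hunt for fit.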
The authors showed each person sets of neutral (whatever the hell that is), pleasant, threatening, and disgusting photos. There weren’t any reported differences in the fMRI manipulated data among the three groups of people seeing these images.
Next up was to form “contrasts”, which is to say, to difference the fMRI manipulated data from times when people looked at disgusting, threatening, and pleasant images against so-called neutral images. These same differences were applied to averages between “conservatives” and “liberals.” The “moderates”, sad folks, were thereafter forgotten.
Incidentally, the types of people in the “conservative” and “liberal” groups were not the same: “liberals” averaged 33 years old, 39% female; “conservatives” 27 years old, 61% female. Might these biological differences account for differences in fMRI manipulated data? The authors admit (in supplementary material) that “religiousness”, age, and sex “were significantly correlated with political attitudes”. But they put this down to “false alarms” and carried on.
Now came generalized linear models—we still haven’t reached the elastic net—where for each individual “a temporal high-pass filter (128s) and order 1 temporal autocorrelation (AR(1)) was assumed”. And “The onsets for each picture subcondition (core/contamination disgust, animal reminder disgust, actual threat, no actual threat, social pleasure, nonsocial pleasure) and fixation crosses were convolved with a canonical hemodynamic response function…using a delta function of zero duration”, etc.
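For readers wondering what it means to convolve picture onsets “with a canonical hemodynamic response function…using a delta function of zero duration”, here is a toy version, assuming one common double-gamma parameterization; the authors’ exact parameters may differ.

```python
import numpy as np
from scipy.stats import gamma

# One common parameterization of a "canonical" hemodynamic response
# function: difference of two gamma densities (peak near 6 s, undershoot
# near 16 s). This is an illustrative choice, not necessarily the paper's.
t = np.arange(0, 32, 0.1)  # time in seconds, 0.1 s steps
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6

# A "delta function of zero duration" at each picture onset:
onsets = np.zeros_like(t)
onsets[[50, 150, 250]] = 1.0  # hypothetical onsets at 5 s, 15 s, 25 s

# The predicted BOLD signal is the convolution of onsets with the HRF.
predicted_bold = np.convolve(onsets, hrf)[: len(t)]
```

All this produces is a smooth predicted curve to regress the voxel time series against; every choice in it carries uncertainty that, as noted, is not carried forward.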
And that wasn’t all. “Six head motion parameters were also included in the first level GLM as covariates.” So were age and sex. Uh oh. Then they “separately examined the maps of [Disgusting – Neutral], [Threatening – Neutral], and [Pleasant – Neutral] contrasts”. Then some t-tests and some other things.
Result? “The contrasts with threatening or pleasant pictures revealed no regions surviving multiple corrections. However, in the [Disgusting > Neutral] contrast, the Conservative group showed greater activity than the Liberal group in several regions” (hint: amygdala! amygdala!). Yet, sadly, “No regions survived correction for multiple comparisons for the [Liberal group > Conservative group] comparison.”
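Since “surviving multiple corrections” does all the work in that quotation, here is a minimal sketch of why it matters, run on fabricated noise (none of this is the authors’ data or code): test thousands of voxels where there is no difference at all, and plenty come up “significant” until you correct.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels = 10_000
group_a = rng.normal(size=(30, n_voxels))  # 30 fake "liberals", pure noise
group_b = rng.normal(size=(30, n_voxels))  # 30 fake "conservatives", pure noise

# One t-test per voxel, all at once.
t, p = stats.ttest_ind(group_a, group_b, axis=0)

uncorrected = int(np.sum(p < 0.05))            # "significant" voxels, no correction
bonferroni = int(np.sum(p < 0.05 / n_voxels))  # after Bonferroni correction

print(uncorrected, bonferroni)  # roughly 500 versus roughly none
```

Hundreds of voxels pass uncorrected despite both groups being identical noise; correction wipes nearly all of them out, which is precisely what happened to the authors’ contrasts.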
Another no result. So back to the computer and the “penalized logistic regression analysis”, a.k.a. “elastic net”.
“First, we extracted a map of the [Disgust > Neutral] contrast for each participant. Then, we applied an a priori mask, which was generated from the Neurosynth website”. Then they “obtained the union of meta-analytic (positively correlated and both forward and reverse inference) maps of ‘Emotion’ and ‘Attention’” and then finally formed up all the voxels into a matrix and submitted all to the “elastic net.”
That creature is so cumbrous I don’t dare describe it. But it was, in the end, fit to the “individual scores on a standard political ideology assay” and, mirabile dictu, the model fit was reasonable. But only for those times disgusting images were viewed (and leaving out “moderates”). Would young females dislike disgusting images more than older males? Just asking.
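I will at least say what the creature is: ordinary logistic (or linear) regression with a mixture of L1 and L2 penalties. A minimal sketch on made-up data, which reproduces nothing from the paper (the shapes, penalty mix, and labels are all illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_subjects, n_voxels = 70, 5000
X = rng.normal(size=(n_subjects, n_voxels))   # fake voxel contrast values
y = rng.integers(0, 2, size=n_subjects)       # fake liberal/conservative labels

# Elastic net = mix of L1 and L2 penalties; l1_ratio sets the mix.
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, max_iter=5000)
model.fit(X, y)

# With far more voxels than subjects, in-sample "fit" comes cheap,
# even though the labels here are random coin flips.
print(model.score(X, y))
```

Note the in-sample score is well above chance despite the labels being noise, which is the heart of the complaint about model fit.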
The true test: How well does their model predict political attitudes for people not used to fit the model? [INSERT CRICKET CHIRPS HERE]
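That out-of-sample check is not hard to run; scikit-learn does it in one line. On fabricated noise data (illustrative only, not the authors’ pipeline), in-sample fit soars while cross-validated accuracy sits near the coin flip:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(70, 5000))   # fake voxel data, pure noise
y = rng.integers(0, 2, size=70)   # fake political labels, coin flips

model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, max_iter=5000)

in_sample = model.fit(X, y).score(X, y)
out_of_sample = cross_val_score(model, X, y, cv=5).mean()

print(in_sample, out_of_sample)  # high versus roughly coin-flip
```

If the authors had reported this number, the crickets might have had company.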
The authors conclude “Neuroscience has started to provide rich information about the neurophysiological processes underlying political behavior.” No, it hasn’t. It is true that a spate of flawed papers are appearing, each borrowing the mistakes of the other. Yet the authors don’t even blush when they say “Our results have important implications for the links between biology, emotions, political ideology, and human nature more fundamentally.”
Here’s where it gets scary, folks. They suggest “people are born with certain dispositions and traits that influence the formation of their political beliefs”. This seems trivially true; after all, some of us are men and some women, and that difference means a lot. But the differences the authors mean refer to flawed, ad hoc, idiotically scaled questionnaires. How long until some bright academic produces “the” list of questions which separates the sheep from the goats?
Next: “A wide range of brain regions contributed to the prediction of political ideology (Figure 3A), including those known from past work to be involved in the processing and interoception of disgust and other stimuli with negative affective valence, but also those involved in more basic aspects of attentive sensory processing”.
The mistake here is to assume we are our brains, slaves to them somehow, that these curious organs can make us do what they like, and that we have little to say about it. The lack of philosophical training tells again.
Nowhere do these authors (or any others that I have seen) betray any lack of confidence in their convoluted analyses. It seems—I’m just guessing—that all these authors think that because their analyses are complex they are therefore right. We need a name for this fallacy.
1In supplementary material the authors say 12 people were removed from the analysis, but it’s not clear whether these were removed before or after arriving at the 83.
Thanks to Rexx Shelton, Robert, and one anonymous reader for suggesting this topic.