Today’s headlines mostly got it wrong:
- The New York Sun said “Study Shatters Myth That Boys Are Better At Math.”
- The New York Post said “Girls = boys in math skills.”
- The New York Daily News said “Math gender differences erased.”
- The New York Times said “Math Scores Show No Gap for Girls, Study Finds.”
Only Keith Winstein at the Wall Street Journal got it right.
This is, of course, a political topic, as evidenced by the Times opening its take on the story by recalling the fate of Larry Summers, ex-president of Harvard, who dared to publicly wonder whether males and females have similar mathematical ability. In case you don’t recall, he surmised that they did not, and he was crucified for uttering such politically incorrect heresy.
Janet Hyde, a professor at the University of Wisconsin–Madison who led the study, called the idea that boys might be better at math a “stereotype.” Well, let’s see.
Hyde’s study, which is wholly statistical, is typical. And none of the headlines, save the WSJ, correctly describe what Hyde actually did. To explain it, I have to get a bit technical, but stay with me because this is very important.
Hyde fit a probability model to her data and then made an indirect statement about the value of that model’s parameters. What does this mean? She first assumed that the approximate uncertainty in math scores could be modeled by a normal distribution. Normal distributions have two parameters which must be specified. The first is usually (and mistakenly) called the “mean,” and it describes where the peak or center of the normal distribution lies. The second is usually (and mistakenly) called the “variance,” and it describes the spread of the distribution: larger variances mean that the data are more uncertain.
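To make this concrete, here is a minimal sketch using Python’s standard-library `statistics.NormalDist`. The numbers are purely illustrative assumptions of mine, not estimates from Hyde’s data: two normal models with the same center but slightly different spreads.

```python
from statistics import NormalDist

# Two hypothetical normal models for normalized math scores.
# These parameter values are invented for illustration only --
# they are NOT estimates from Hyde's study.
girls = NormalDist(mu=0.0, sigma=1.0)
boys = NormalDist(mu=0.0, sigma=1.1)

# Each curve is completely specified by its two parameters:
# the "mean" (where the peak sits) and the spread.
print(girls.mean, girls.stdev)
print(boys.mean, boys.stdev)
print(boys.variance > girls.variance)  # boys' model is more spread out
```

Once the two parameters are fixed, everything about the curve—its peak, its tails—follows.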
A statistical test is then run, asking “Are the mean parameters for boys and girls equal or unequal?” If the mean for the boys is larger than the mean for the girls, the implication is that boys are better at math than are girls. If the means are roughly equal, then people conclude—sometimes incorrectly—that the performance of boys and girls is “the same.”
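A back-of-the-envelope version of such a test can be sketched with Welch’s two-sample t statistic for the difference in means. The summary statistics below are hypothetical numbers I made up for illustration; none of them come from the study.

```python
from math import sqrt

# Hypothetical summary statistics -- invented for illustration,
# not taken from Hyde's study.
mean_boys, var_boys, n_boys = 50.1, 110.0, 10_000
mean_girls, var_girls, n_girls = 50.0, 100.0, 10_000

# Welch's t statistic for the difference in means: a value small
# relative to ~2 gives no reason to think the mean parameters differ.
t = (mean_boys - mean_girls) / sqrt(var_boys / n_boys + var_girls / n_girls)
print(round(t, 2))
```

With samples this large and means this close, the statistic is small, and the test declares the means “the same”—while saying nothing at all about the variances.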
It is important to emphasize that the study as reported in most newspapers only said something about the mean parameters for the boys and girls. These parameters were roughly equal, and this implied, all other things being equal, that boys’ and girls’ ability is equal.
But all things are not equal.
What all the news reports, except the WSJ, forgot was the variance. The following picture will help explain what I mean.
The top picture shows the normal distributions of what might be normalized math test scores for girls and boys: scores greater than 0 are better than average, scores less than 0 are worse than average (these data are just an illustration; I don’t have Hyde’s study data, but the point is the same). The girls are the solid line, the boys are the dashed. You can see that both have a peak in exactly the same place. This implies that the mean performance for both boys and girls is the same, that is, on average, their performance is the same.
But notice that the boys’ line is just a tiny bit more spread out than the girls’. This is because the variance for the boys is larger than for the girls, but only a little larger. Can this make any difference to performance on math tests? Yes, a huge difference.
The lower-left picture is just like the top picture, but it blows up the area of high test scores (those greater than 3.5). The dashed line (the boys) is everywhere above the solid line (the girls), which means it is more likely for boys than for girls to score at the highest levels of the test.
The picture on the lower-right shows how much more likely. For example, for test scores of 5 or higher, boys are over 9 times more likely than girls to reach them! This is not to say that there will not be any girls at the very top: there will be.
What this all means is that you will see many more boys than girls at the very top of the test scores. But it also means that you will see many more boys than girls at the very bottom of the test scores! We could draw a similar picture to the lower-right which shows those who do very badly at the math tests: boys outnumber girls here, too.
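This tail behavior is easy to check numerically. A sketch, again using my illustrative assumption of equal means with the boys’ standard deviation 10% larger (not Hyde’s actual estimates), and the standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

# Equal means, slightly different spreads -- illustrative values only,
# not parameters estimated from the study.
girls = NormalDist(0.0, 1.0)
boys = NormalDist(0.0, 1.1)

for cut in (3.5, 4.0, 5.0):
    # Probability of scoring above the cutoff under each model.
    p_boys = 1.0 - boys.cdf(cut)
    p_girls = 1.0 - girls.cdf(cut)
    print(f"above {cut}: boys/girls ratio = {p_boys / p_girls:.1f}")

# Normal curves are symmetric, so the same ratios hold at the bottom:
# boys are equally over-represented below -3.5, -4.0, and -5.0.
```

A mere 10% difference in spread inflates the ratio at a cutoff of 5 to roughly ninefold, and, by symmetry, it inflates the bottom tail identically.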
As the WSJ said, “Girls and boys have roughly the same average scores on state math tests, but boys more often excelled or failed.” This is all because, for every grade and in every state, the mean for the boys and the girls is the same, but the boys are always more variable.
Now, if this difference—for it is a difference—persists at the college and post-graduate level, and if math professors are chosen by their ability, then males will outnumber females. Which is exactly what is found at actual colleges and universities.
Why the difference in variance exists is unknown, but it is again a political question. We could surmise, with Mr Summers, that the difference is due to innate tendencies, but to admit that is to admit that, at the top, men are better than women. But this also admits that, at the bottom, men are worse than women. The difference might be due to education: teachers could be singling out the best—and worst—boys and treating them differently than the best and worst girls. But this is unlikely at the college level, and it does not account for post-graduate performance either (number and quality of papers published, etc.).
It is more plausible that males and females are different in their abilities. Just don’t say this very loudly, or you will get yourself into some serious trouble, like Mr Summers, who, as the philosopher David Stove often said, “quickly rediscovered the definition of the word sacred.”