Unleash the polls! No, I don’t mean the men who bravely served under Grand Duke of Lithuania Władysław II Jagiełło (free bad joke of the day!), but those election omens which nowadays plague news reports. It is well to understand how these things work.
If a poll is to be used to predict the outcome of a vote—and not, say, to shore up the hopes of a beleaguered constituency (see the New York Times’s polls)—then there are two important points to remember:
- The sample of the poll must “look like” the eventual voters. This is true for any statistical model, not just polls. Models only apply to new data that “looks like” the data used to build the model.
- A poll is always accurate for the “kind of” sample it represents, just as any statistical model (excepting mistaken calculations) is always valid for the “kind of” sample that was used to form the model.
The first one you probably knew, though we still have to define “looks like”; the second you might not have. About these, more in a moment.
Size of poll
The “size” of a poll is also of interest, but not of much interest: almost all polling agencies gather a sufficiently large sample (but beware those with numbers less than about 400). The size of the poll is what gives those “+/- 4 points” (or whatever) which appear in fine print and which are routinely ignored. These numbers are always wrong; that is, they do not mean what you think they do. They are, however, an argument for using predictive rather than classical statistics.
The “+/- 4” means that in infinite repetitions of the poll, 95% of those repetitions will produce numbers within 4 points of the original poll. But since not even the federal government has time for infinite repetitions, it would be better to just perform the (Bayesian) calculation and state the actual uncertainty.
In practice, if you don’t understand any of that gobbledygook, this means adding a point or two to the stated plus-or-minus. Thus a “+/- 4 point” becomes realistically a “+/- 5 point” or “+/- 6 point” uncertainty, and so forth. You must always do this. This is the uncertainty assuming the sample “looks like” the eventual voters. If the sample does not look like the eventual voters, then you must increase the plus-or-minus.
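For the curious, the stated plus-or-minus comes from a textbook formula driven almost entirely by sample size; here is a minimal sketch in Python, assuming the worst-case 50-50 split:

```python
import math

def classical_moe(n, p=0.5, z=1.96):
    """Classical 95% margin of error, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(classical_moe(600), 1))  # 4.0 -- the familiar "+/- 4 points"
print(round(classical_moe(400), 1))  # 4.9 -- why samples under ~400 deserve suspicion
```

Note that the formula knows nothing about whether the sample “looks like” eventual voters, which is exactly why the honest move is to widen it.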
So what does “look like” mean? Well, lack of bias, for one thing. Let’s not consider what we can label “NPR bias,” which creeps in with questions like, “Are you against the death of innocent children?” where a “Yes” means support for a tax increase to create new bureaucracy which tangentially involves studying children’s eating habits, and where it will be reported that “78% of Americans are in favor of the job the government is doing.” For non-NPR listeners, this is usually easy to spot and discount.
The bias I mean is how far a poll systematically departs from what the eventual voters look like. To understand that, we first have to examine “random” samples.
First, forget all the nonsense you hear about a poll having to be a “random” sample. Random merely means unknown and no pollster worth his (hefty) fee samples “unknownly.” “Random” sampling is another holdover from the classical days of statistics, when people still believed that creating a “random” sample imbued it with mystical powers without which it could not be modeled.
What you really want is known sampling, controlled sampling, purposeful sampling. This is why pollsters make a point to sample both men and women, blacks and whites, Catholics and Protestants, why they take individual samples within States and within localities inside States, and why no pollster just “randomly” samples citizens.
“Random” dialing, even after the pollster slices the data into chunks, does not provide any benefit. Removing bias in the dialing does; about this more in a moment. For a fuller explanation of the magical thinking involved in “random” sampling, see this article; and then this one.
Our own poll
You and I are going to conduct a poll (and ignore the burdensome +/-). Since this is a blog of Right and Reason, of Morality and Manliness, of Science and Sanity, you agree with me that Romney is the only choice. Very well, that’s 2 for Romney, 0 for Obama. 100% for Romney, then. Somebody call the press.
Now, this is a poll. It is no better and no worse than any other poll—as long as we keep in mind point #2 above: that all polls are valid representations of the kind of people sampled. This poll is thus an accurate judgment of people who think and will vote like you and me.
But since you and I don’t “look like” the people who will turn out next week, this poll won’t be very good at predicting the results of the general vote. So what precisely does “look like” mean?
Every person has a near infinitude of characteristics: he or she has a sex, an age, height, weight, lives in a particular place, has read some books but not others, watches certain television programs but not others, works at a job or collects government largesse, drinks or abstains, prays or preys, and on and on and on some more.
Because the number of characteristics is immensely large, no sample can ever look like the eventual voting population in every particular. (This is also why we don’t need “randomness”: random sampling cannot guarantee an even dispersion of characteristics; only control can.) But a sample can look like its population if we only consider a subset of characteristics.
For example, eventual voters are usually split about equally between males and females. We could easily design a sample which (non-randomly) includes an equal number of men and women. That sample then “looks like” the population. At least as far as sex goes.
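A toy sketch of such controlled sampling, using an invented pool that is 70% women; the point is that the balance is imposed, not hoped for:

```python
import random

random.seed(1)

# Hypothetical pool of respondents, unbalanced on sex.
pool = [{"sex": "F"} for _ in range(700)] + [{"sex": "M"} for _ in range(300)]

def controlled_sample(pool, n_per_group):
    """Deliberately draw an equal number of men and women: control, not chance."""
    men = [p for p in pool if p["sex"] == "M"]
    women = [p for p in pool if p["sex"] == "F"]
    return random.sample(men, n_per_group) + random.sample(women, n_per_group)

sample = controlled_sample(pool, 200)
print(sum(p["sex"] == "M" for p in sample))  # exactly 200, by construction
```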
But what about “likely voters”, what about race, what about residence and age and religiosity, etc.? Which of the infinitude of characteristics are important and which not? The answer, which you will not like, is that nobody knows. Or nobody knows exactly. We do have a guideline, though.
A characteristic is important to the extent it changes the judgment of uncertainty in how a person will vote. Make sense?
Suppose you are blind and somebody sets you down in Cincinnati and you grab the first person who passes (not seeing whether this is a man or a woman). What is the probability that person will vote for Romney? Given no other information1, except assuming this is an eligible voter, you can only conclude 50%—unless you think living in Cincinnati confers probative information.
Now suppose you learn the person is a registered Democrat. What is your new judgment of the probability this person will vote for Romney? Lower. Knowing the characteristic party affiliation has changed your judgment of uncertainty, and by a lot. Party affiliation, then, is extremely important.
Next suppose you learn this person chewed Bazooka Joe and eschewed Juicy Fruit as a kid. Does that change your judgment of uncertainty in whether this person will vote for Romney? Not really, no. Gum preference is therefore unimportant.
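The guideline can be put crudely in code. The probabilities below are invented purely for illustration; what matters is the distance each characteristic moves the judgment:

```python
prior = 0.50              # knowing only "eligible voter in Cincinnati"
p_given_democrat = 0.10   # invented: a registered Democrat is unlikely to vote Romney
p_given_bazooka = 0.50    # invented: gum preference tells us nothing

def importance(prior, posterior):
    """A characteristic matters to the extent it moves the probability."""
    return abs(posterior - prior)

print(round(importance(prior, p_given_democrat), 2))  # 0.4 -- extremely important
print(round(importance(prior, p_given_bazooka), 2))   # 0.0 -- unimportant
```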
And so on across an endless list. Pollsters have moderate to good guesses as to which characteristics are important and which are not, because they have found these characteristics to be important in past polls. Whether they have identified all of them, or whether these characteristics will remain important, are open questions—with answers leaning towards No.
Regardless, pollsters take characteristics which they deem important—and no two pollsters agree on their lists—and then they seek a controlled sample based on them.
We agree party affiliation is important for voting, but it is also probative for “turn out.” That is, knowing a person’s party affiliation changes our judgment about whether he will show up to vote. Historical observation showed that in 2008, Democrats out-showed Republicans by a sizable margin at most locations. This was not so in the 2010 mid-term elections, where the disparity vanished or even favored Republicans.
The 2008 disparity is one reason why you see discrepancies in sampling of today’s polls. Pollsters are guessing more Democrats than Republicans will show (and by certain margins). If they are right, then they should angle their samples in the direction of more Democrats, because they want their sample to “look like” eventual voters.
The problem is that if they guess wrong, or if Democrats more often than Republicans say they will vote but then do not, their sample will not look like eventual voters.
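In practice a pollster angles his sample by weighting rather than discarding respondents; here is a hypothetical sketch, with invented counts and an invented turnout guess:

```python
# Invented raw sample and an invented turnout guess.
raw = {"D": 450, "R": 350, "I": 200}               # respondents by party
turnout_guess = {"D": 0.38, "R": 0.33, "I": 0.29}  # guessed shares of eventual voters

n = sum(raw.values())
weights = {party: turnout_guess[party] * n / count
           for party, count in raw.items()}

# Each sampled Democrat now "counts" for less than one voter and each
# Independent for more, so the weighted sample matches the turnout guess.
print(round(weights["D"], 2), round(weights["I"], 2))  # 0.84 1.45
```

If the turnout guess is wrong, the weights faithfully reproduce the wrong guess: weighting cannot rescue a sample from a bad judgment about who will show.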
This “over sampling” of Democrats angers many people. And that may be because of their unstated, but felt, appreciation that polls are, to some extent, influential forecasts. They have a gut suspicion that if purposely biased polls are repeatedly released, polls which favor the legacy media’s darling, then this may depress turn-out for the party of the Right. And if the candidate of the not-Right party wins, the polls which predicted his victory will thus seem “good.”
There is some truth in this, but the effect is likely small, especially at the national level.
What is a good poll?
A good poll is one which matches the eventual vote breakdown. A bad poll is one which does not. In advance of the actual election, there are only two ways to judge goodness and badness.
The first is how well the pollster has done in previous presidential elections. Since not many pollsters have polled many presidential elections, simply because we have had very few of these elections, past performance is thin evidence. Not useless, just of little value.
To the extent you feel a pollster’s performance on non-presidential elections matches his performance on presidential elections, there is more evidence from which to draw: the many Congressional elections, for example.
The second is how well you think the pollster has done in making his sample look like the eventual voter turn out. This is hardly quantifiable (which is not a detriment; there is far too much unnecessary quantification in our world). If you think a pollster is loony for releasing a D+9 poll in Ohio, then obviously you will give that pollster much less weight.
Polls are not probabilities
Lastly, polls are guesses of what the vote breakdown will be, and are not probabilities of winners and losers. Some polls now have Romney at 48% and Obama at 48%. This does not mean that Romney or Obama has a 48% chance of winning. It means this poll guesses the actual vote will be 48% for Romney, 48% for Obama (plus or minus something).
To get to a probability, we take this poll (and other poll results) and other information (such as GDP, unemployment, etc.) we deem probative and put it into a model, which gives us a prediction of who will win. Nate Silver has done this and derived a 74.6% (or whatever) chance for Obama. But see this article on Silver’s “lucky guessing.”2
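And to demystify the “Monte Carlo” of footnote 2: such a simulation is nothing more than guessing the uncertain margin many times and counting wins. A toy sketch, where the 3-point standard deviation is my assumption and not Silver’s model:

```python
import random

random.seed(2012)

def win_probability(poll_lead=0.0, sd=0.03, trials=100_000):
    """Simulate the election-day margin many times; the fraction of runs
    in which the candidate finishes ahead is his 'win probability'."""
    wins = sum(random.gauss(poll_lead, sd) > 0 for _ in range(trials))
    return wins / trials

print(round(win_probability(0.0), 1))  # 0.5 -- a 48-48 tie is a coin flip
print(win_probability(0.02) > 0.7)     # True -- a 2-point lead, yet far more than 52%
```

The second line is the whole point: a small lead in vote share can translate into a large probability of winning, which is why the two numbers must never be confused.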
1This is a difficult point for some. To clarify: “no other” means “no other.”
2Silver was also good at marketing himself, telling the world he used “Monte Carlo” simulations for his model, a term which is unbearably sexy to some. As an unknown statistician I say this in all jealousy.