Beer: alcohol, calories, and carbs

Boxplots of calories by beer style

Which beer style has the most calories? In general: porters. The least: lager, the style of beer with which you are probably most familiar. Budweiser, Miller, Coors, and the majority of mass-market beers are all brewed in the lager style.

These box plots use data from the web site RealBeer.com. The editors of that site keep a running list of brewers and beers, with the alcohol, calorie, and carbohydrate content of, at this writing, 229 different beers from 72 different breweries. There are, naturally, many more beers and breweries than this around the world; this data reflects the beers of most interest to readers and users of RealBeer.com. The classification into styles of beer is my attempt, and any mistakes in classification are my own. You should visit RealBeer.com to learn more about beer styles. The RealBeer.com data set is most complete for alcohol content; there is far less information about calories and carbs, owing to the greater difficulty of obtaining or measuring those values.

Here’s a quick lesson on how to read box plots: the dark center line is the median, the point at which 50% of the values lie above and 50% below. The next two horizontal lines are the quartiles: the upper one is the 3rd quartile, above which lie 25% of the values; the lower one is the 1st quartile, below which lie 25% of the values. The top and bottom lines are the 5th and 95th percentiles, with the obvious interpretation. Points beyond these are more extreme values. Box plots are intended to give you an idea of the spread, variability, and distribution of data.
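Those summary numbers are easy to compute yourself. Here is a minimal sketch, using made-up calorie values rather than the RealBeer.com data, of the five numbers a box plot in this style is built from:

```python
import numpy as np

# Hypothetical calorie counts for ten beers (invented, not the RealBeer.com data).
calories = np.array([95, 110, 140, 145, 150, 155, 160, 170, 200, 330])

median = np.percentile(calories, 50)        # dark center line: half above, half below
q1 = np.percentile(calories, 25)            # 1st quartile: 25% of values below it
q3 = np.percentile(calories, 75)            # 3rd quartile: 25% of values above it
p5, p95 = np.percentile(calories, [5, 95])  # the outermost lines in these plots

print(median, q1, q3, p5, p95)
```

NumPy’s default (linear-interpolation) percentile method is used here; other software may place the quartiles slightly differently.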

But the main lesson is: if you are counting calories (and don’t insist on taste), lager beers are your choice. Lagers and ales also have the widest ranges of calories, but this may simply reflect the fact that most of the data come from these two groups: 44% of the beers listed are ales, 38% lagers, 4% porters, 8% stouts, and 6% wheats. There was also one barley wine, a style noted for its high alcohol content, which I classified as an ale, since it is difficult to do statistics with just one data point.

How about alcohol content?

Never use bar charts! Case study #1

Social Security disability applications

This graphic comes from the New York Times article “Social Security Disability Cases Last Longer as Backlog Rises.” It obviously intends to show how applications have increased since 1998.

This is a terrible plot.

The reason is not the usual one, namely that you should never, with only rare exceptions, use a bar chart. Bar charts are simple to construct, but there are nearly always better alternatives.

But the evil of bar charts is well known. The reason this particular plot is bad has to do with the number 0. Notice that the chart starts at 0, even though we don’t meet our first actual number until around 2 million. The only justification for starting at 0 is that it is true that you can’t have fewer than 0 applications. That is not a good reason. They should have started with a higher number.

Don’t think it makes a difference? Then take a look at this re-drawing:
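The effect of the baseline is easy to quantify. Here is a small sketch, with invented application counts (not the Times’ actual figures), of how the drawn bar heights depend on where the axis starts:

```python
# Invented application counts, in millions (not the Times' actual figures).
apps_1998, apps_2008 = 2.1, 2.6

# Axis starting at 0: bar heights are the raw values, so the two bars
# look nearly the same height.
ratio_from_zero = apps_2008 / apps_1998

# Axis starting at 2.0 million: heights are measured from that baseline,
# and the growth becomes plain to see.
baseline = 2.0
ratio_from_baseline = (apps_2008 - baseline) / (apps_1998 - baseline)

print(f"from 0: {ratio_from_zero:.2f}x taller; from 2.0M: {ratio_from_baseline:.1f}x taller")
```

Same data, but the second bar goes from looking about 1.2 times taller to 6 times taller, which is exactly the contrast the re-drawing exposes.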

How to Exaggerate Your Results: Case study #2

That’s a fairly typical ad, which is now running on TV and which also appears on Glad’s web site. It looks like a clear majority would rather buy Glad’s fine trash bag than some other, lesser bag. Right?

Not exactly.

So what is the probability that a “consumer” would prefer a Glad bag? You’ll be forgiven if you said 70%. That is exactly what the advertiser wants you to think. But it is wrong, wrong, wrong. Why? Let’s parse the ad and see what it can teach you about how to cheat.

The first notable phrase is “over the other leading brand.” This heavily implies, but of course does not absolutely prove, that Glad commissioned a market research firm to survey “consumers” about which trash bag they preferred. The most direct way to do this is to ask people, open-endedly, “What trash bag do you prefer?”

But evidently, this is not what happened here. Here, the “consumer” was given a dichotomy, “Would you rather have Glad? Or this other particular brand?” Here, we have no idea what that other brand was, nor what was meant by “leading brand.” Do you suppose it’s possible that the advertiser gave in to temptation and chose, for his comparison bag, a truly crappy one? One that, in his opinion, is obviously inferior to Glad (but maybe cheaper)? It certainly is possible.
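Here is one way, with entirely invented market shares, that a forced head-to-head question can manufacture an impressive-sounding number:

```python
# Entirely invented market shares: suppose only 28% of consumers actually
# prefer Glad when every brand is on the table.
shares = {"Glad": 0.28, "Other leading brand": 0.12, "Everything else": 0.60}

# In a forced choice between just the two named brands, only their relative
# shares matter; everyone else's favorite brand drops out of the question.
head_to_head = shares["Glad"] / (shares["Glad"] + shares["Other leading brand"])
print(f"Glad wins the forced choice {head_to_head:.0%} of the time")
```

On these made-up numbers the ad could truthfully claim “70% chose Glad over the other leading brand,” even though fewer than a third of consumers prefer Glad at all.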

So we already suspect that the 70% guess is off. But we’re not finished yet.

Why most statistics don’t mean what you think they do: Part II.

In Part I of this post, we started with a typical problem: which of two advertising campaigns was “better” in terms of generating more sales. Campaigns A and B were each tested for 20 days, during which time sales data was collected. The mean sales during Campaign A was $421 and the mean sales during Campaign B was $440.

Campaign B looks better on this evidence, doesn’t it? But suppose that instead of 20 days we had run the campaigns for only one day each, and that the sales for A were $421 and those for B were $440. B still looks better, but our intuition tells us that the evidence isn’t as strong, because the difference might be due to something other than the ad campaigns themselves. One day’s worth of data just isn’t enough to convince us that B is truly better. But is 20 days enough?

Maybe. How can we tell? This is where Statistics comes in. And it turns out that this is no easy problem. But please stay with me, because failing to understand how to answer this question properly leads to the most common mistake made in statistics. If you routinely use statistical models to make decisions like this (“Which campaign should I go with?”, “Which drug is better?”, “Which product do customers really prefer?”), you’re probably making this mistake too.
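The one-day-versus-twenty intuition can be checked with a small simulation. This sketch assumes, purely for illustration, that daily sales for both campaigns come from one and the same distribution, so that any gap between their means is chance alone:

```python
import random

random.seed(1)

# Assume, purely for illustration, that daily sales for BOTH campaigns come
# from the same normal distribution: mean $430, standard deviation $50.
# Under that assumption, any observed gap between the campaigns is pure chance.

def mean_gap(n_days):
    """Absolute difference between the two campaigns' mean sales over n_days."""
    a = [random.gauss(430, 50) for _ in range(n_days)]
    b = [random.gauss(430, 50) for _ in range(n_days)]
    return abs(sum(b) / n_days - sum(a) / n_days)

trials = 10_000
freq_1_day = sum(mean_gap(1) >= 19 for _ in range(trials)) / trials
freq_20_days = sum(mean_gap(20) >= 19 for _ in range(trials)) / trials

print(f"Chance alone gives a $19+ gap in {freq_1_day:.0%} of 1-day tests "
      f"and {freq_20_days:.0%} of 20-day tests")
```

With these made-up numbers, a $19 gap is unremarkable after one day and much less likely, though far from impossible, after twenty. The strength of the evidence depends on more than the two means.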

In Part I of this post, we started by assuming that the (observable) sales data could be described by probability models. A probability model gives the chance that the data take any particular value. For example, we could calculate the probability that the sales in Campaign A were greater than $500. We usually write this in mathematical symbols, like this:

Pr(Sales in Campaign A > $500 | e)

Most of that formula should make sense to you, except perhaps for the right-hand side. The bar near the end, “|”, is the “given” bar: whatever appears to the right of it is accepted as true. The “e” is whatever evidence we have, or think is true. We can ignore that part for the moment, because what we really want to know is
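To make the notation concrete: if the evidence “e” amounted to assuming a particular probability model, the statement above would take a definite numerical value. A sketch, with a made-up standard deviation (the post gives only the mean of $421):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """Pr(X <= x) for a normal distribution with mean mu and sd sigma."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Assumed model standing in for "e": sales in A are normal with mean $421
# and sd $50 (the sd is invented; the source gives only the mean).
pr_above_500 = 1 - normal_cdf(500, mu=421, sigma=50)
print(f"Pr(Sales in Campaign A > $500 | e) = {pr_above_500:.3f}")
```

Change the assumed model behind the bar and the probability changes with it, which is the whole point of writing the “| e” explicitly.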

Pr(Sales in B > Sales in A | data collected)

But that turns out to be a question that is impossible to answer using classical statistics!

Why most statistics don’t mean what you think they do: Part I.

Here’s a common, classical statistics problem. Uncle Ted’s chain of Kill ’em and Grill ’em Venison Burgers tested two ad campaigns, A and B, measuring the sales of sausage sandwiches for 20 days under each campaign. It was found that mean(A) = 421 and mean(B) = 440. The question is: are the campaigns different?

In Part II of this post, I will ask the following, which is not a trick question: what is the probability that mean(A) < mean(B)? The answer will surprise you. But for right now, I merely want to characterize the sales of sausages under Campaigns A and B. Rule #1 is always look at your data! So we start with some simple plots:

Box plot and density plot of the sales of campaigns A and B

I will explain box and density plots elsewhere; in short, these pictures show the range and variability of the actual observed sales over the 20 days of each ad campaign. Both plots show the range and frequency of the sales, but in different ways. Even if you don’t understand these plots well, you can see that the sales under the two campaigns were different. Let’s concentrate on Campaign A.

This is where it starts to get hard, because we first need to understand that, in statistics, data is described by probability distributions, which are mathematical formulas that characterize pictures like those above. The most common probability distribution is the normal, the familiar bell-shaped curve.

The classical way to begin is to assume that the sales in A (and in B, too) follow a normal distribution. The plots give us some evidence that this assumption is not terrible (the data is sort of bell-shaped), though the fit is not perfect. But this slight deviation from the assumptions is not the problem. Not yet.
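One crude way to judge whether “sort of bell-shaped” is good enough is to compute the sample skewness, which sits near zero for normal data. A sketch, using simulated sales in place of the actual 20 days (which aren’t reproduced here):

```python
import random

random.seed(2)

# Simulated stand-in for the 20 days of Campaign A sales (the actual data
# aren't given in the post); drawn from a normal, so the check should pass.
sales = [random.gauss(421, 50) for _ in range(20)]

n = len(sales)
mean = sum(sales) / n
sd = (sum((x - mean) ** 2 for x in sales) / n) ** 0.5

# Sample skewness: near 0 for bell-shaped data, large in absolute value for
# lopsided data, where the normal assumption would be harder to defend.
skew = sum(((x - mean) / sd) ** 3 for x in sales) / n
print(f"mean = {mean:.0f}, sd = {sd:.0f}, skewness = {skew:.2f}")
```

With only 20 days, even perfectly normal data can wander some distance from zero skewness, which is one reason eyeballing the plots is only the first step.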