
New Paper: Parameter-Centric Analysis Grossly Exaggerates Certainty

By moi as well. Paper link. Abstract:

The reason probability models are used is to characterize uncertainty in observables. Typically, the certainty in the parameters of fitted models, judged by their parametric posterior distributions, is much greater than the certainty in new (unknown) observables, judged by their predictive posterior distributions. Consequently, when model results are reported, uncertainty in the observable should be reported, not uncertainty in the parameters of these models. If someone mistook the uncertainty in parameters for uncertainty in the observable itself, a large mistake would be made. This mistake is exceedingly common, and nearly universal in some fields. Reported here are some possible measures of the over-certainty mistake made when parametric uncertainty is swapped for observable uncertainty.

This peer-reviewed (and therefore indisputably true in all its arguments) wonder will appear (in April, I was told) in the Springer collection Data Science for Financial Econometrics.

Now this data set (stopping distance as a function of car speed) has been used innumerable times to illustrate regression techniques, but I believe this is the first time it has been demonstrated how truly awful regression is here.

In each panel, the predictive posterior is given in black, and the parameter posterior in dashed red. To highlight the comparison, the parameter posterior was shifted to the peak of each predictive posterior distribution. The parameter posterior is of course fixed; it is the posterior of the “effect” size, i.e. the regression coefficient, for speed. Here it has a mean of 3.9, with a credible interval of (3.1, 4.8).
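For anyone who wants to poke at the numbers before the code post goes up, here is a minimal sketch in Python (not the paper's actual code). It assumes the data is R's classic cars set (speed in mph, stopping distance in ft) and that the model is the ordinary normal linear regression with flat priors, under which the slope's posterior is a Student t centered at the least-squares estimate. Variable names are mine.

```python
import numpy as np
from scipy import stats

# R's classic `cars` dataset: speed (mph) and stopping distance (ft).
speed = np.array([4,4,7,7,8,9,10,10,10,11,11,12,12,12,12,13,13,13,13,14,
                  14,14,14,15,15,15,16,16,17,17,17,18,18,18,18,19,19,19,20,20,
                  20,20,20,22,23,24,24,24,24,25], dtype=float)
dist = np.array([2,10,4,22,16,10,18,26,34,17,28,14,20,24,28,26,34,34,46,26,
                 36,60,80,20,26,54,32,40,32,40,50,42,56,76,84,36,46,68,32,48,
                 52,56,64,66,54,70,92,93,120,85], dtype=float)

n = len(speed)
xbar = speed.mean()
Sxx = np.sum((speed - xbar) ** 2)

# Least-squares fit; with flat priors these are also the posterior centers.
beta = np.sum((speed - xbar) * (dist - dist.mean())) / Sxx
alpha = dist.mean() - beta * xbar
resid = dist - (alpha + beta * speed)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))  # residual scale
df = n - 2

# Parameter posterior for the speed "effect": Student t centered at beta.
se_beta = s / np.sqrt(Sxx)
tq = stats.t.ppf(0.975, df)
print(f"slope posterior mean {beta:.2f}, 95% CI "
      f"({beta - tq * se_beta:.2f}, {beta + tq * se_beta:.2f})")
# -> about 3.93 and (3.10, 4.77), matching the 3.9 and (3.1, 4.8) above
```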

It is immediately clear that reporting only the parameter posterior implies vastly more certainty than the predictive posteriors do. We do not have just one predictive posterior, but one for every possible level of speed. Hence we also have varying levels of over-certainty. The ratio of predictive to parameter credible interval widths was 40.1 (at 1 mph), 37.8 (10 mph), 37.6 (20 mph), and 40.1 (30 mph).
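Continuing the sketch above (it reuses speed, xbar, Sxx, s, tq, and se_beta from that block), the predictive posterior at a new speed is, under the same flat-prior model, a Student t whose scale carries the full residual spread on top of the parameter uncertainty. Comparing 95% interval widths reproduces, within rounding, the ratios just quoted:

```python
# Predictive posterior at a new speed x0: Student t with df = n - 2,
# centered at alpha + beta * x0, scale inflated by the residual spread.
def pred_scale(x0):
    return s * np.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / Sxx)

for x0 in (1, 10, 20, 30):
    pred_width = 2 * tq * pred_scale(x0)   # 95% predictive interval width
    param_width = 2 * tq * se_beta         # 95% parameter interval width
    print(f"{x0:2d} mph: ratio = {pred_width / param_width:.1f}")
```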

The over-certainty is immense at any speed. But what is even more interesting is the enormous probability leakage at low speeds. There, most of the predictive probability is on stopping distances less than 0, a physical impossibility. The background information B did not account for this impossibility; it merely said, as most do say, that a regression is a fine approximation. It is not. It stinks for low speeds.
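The leakage takes only a few more lines. Still continuing the sketch, the predictive probability of an impossible negative stopping distance is just the Student t CDF at zero; under this flat-prior model roughly 80% of the predictive mass at 1 mph sits below zero:

```python
# Probability leakage: predictive mass on negative stopping distances.
for x0 in (1, 4, 5, 8):
    center = alpha + beta * x0
    p_neg = stats.t.cdf(0, df, loc=center, scale=pred_scale(x0))
    print(f"{x0} mph: Pr(distance < 0) = {p_neg:.2f}")
```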

But this astonishing failure of the model would have gone forever unnoticed had the model not been cast in its predictive form. The parameter-centric analysis would have missed, and almost always does miss, this glaring error.

This, like last week’s, is the uncorrected submission. I haven’t seen the page proofs yet. All typos free of charge!

I’ll also put up the code for all this and make a class post at a later date. We already did the cow part: See this post.

IMPORTANT: The models in this paper are, when bad, purposely bad. They are meant to show what goes wrong in the usual modeling process.

Categories: Statistics

4 replies

  1. When we see authors make the mistake of swapping uncertainties, all we can do is tell them to cut it out.

    Heh. Not bomb them from orbit?

  2. “parameteric”
    “begining”
    “lenth”
    “impies”
    “arrangemnets”
    “yeilds”
    “the the”
    “enropy”

    Justin
