Why Science Is Broken: Hillsdale Speech Video & Transcript Now Online


Here it is:

And here is Greg’s talk:


I followed this closely during the speech, but did not adhere to it perfectly. I don’t have a transcript of Greg’s talk.

A fascinating experiment was conducted not too long ago. An experiment about experiments. About how scientists came to conclusions in their own experiments.

What happened was this. Nate Breznau and others handed out identical data to a large group of researchers and asked each group to answer the same question. The question was this: would immigration reduce or increase “public support for government provision of social policies”?

That can be difficult to remember, so let’s reframe the question in a more memorable way, one more widely applicable to our other examples. Does X affect Y? Does X, more immigration, affect Y, public support for certain policies?

That’s causal language, isn’t it? X affects Y? Words about cause, about what causes what. Cause, and knowledge of cause, is of paramount importance in science. So much so I claim, and I hope to defend, the idea that the goal of science is to discover the cause of measurable things. We’ll get back to that later.

Just over 1,200 models were handed in by researchers, all to answer whether X affected Y. I cannot stress enough that each researcher was given identical data and asked to solve the same question.

Breznau required each scientist to answer the question with a No, Yes, or Cannot Tell. Only one group of researchers said they could not tell. Every other group produced a definite answer.

About one quarter—a number we should all remember—one quarter of the models answered Yes, that X affected Y—negatively. That is, more X, less Y.

Now researchers were also allowed to give some idea of the strength of the relationship, along with whether or not the relationship existed. And that one-quarter who said the relationship between X and Y was negative ranged anywhere from a strongly negative, to something weaker, but still “significant.” Significant. That word we’ll also come back to.

You can see it coming. About another quarter of the models said Yes, X affects Y, but that the relation was positive! More X, more Y, not less!

Again, the strength was anywhere from very strong to weak, but still “significant”.

The remaining half or so of the models couldn’t quite bring themselves to say No: they all still gave a tentative Yes, but said the relationship was not “significant”.

You see the problem. There is in Reality only one right answer, and only one strength of association, if it exists. That a relationship does not exist may even be the right answer. I don’t know what the right answer is, but I do know only one can be. Yet the answers—the very confident, scientifically derived, expert investigated answers—were all over the place and in wild disagreement with each other.

Every one of the models was science. We are told we cannot deny science. We are commanded to Follow The Science.

But whose science?


Now these models were from the so-called soft sciences: sociology, psychology, education and the like. It’s not surprising there are frequent errors from these fields because of the immense and hideous complexity of their subject.

Which is why we often turn to the so-called hard subjects, like physics and chemistry, for “real science.” These are fields in which the subjects under study are more amenable to control, and hence easier to examine. But this, too, is often an illusion.

Physicist Sabine Hossenfelder in a Guardian article calls attention to a peculiar phenomenon in physics, the hardest of hard sciences.

Since the 1980s, [says Hossenfelder,] physicists have invented an entire particle zoo, whose inhabitants carry names like preons, sfermions, dyons, magnetic monopoles, simps, wimps, wimpzillas, axions, flaxions, erebons, accelerons, cornucopions, giant magnons, maximons, macros, wisps, fips, branons, skyrmions, chameleons, cuscutons, planckons and sterile neutrinos, to mention just a few.

None of these turned out to be real. Yet more are proposed constantly. She blames, in part, Popper’s idea of falsificationism, which says that propositions are scientific if they are falsifiable. Any proposition which can be falsified is scientific. It follows that any proposition about anything that is measurable, from Bigfoot to gender theory to the existence of new particles, is scientific. So let’s do science by proposing lots of falsifiable propositions!

This over-broadness was an early, even fatal, criticism of the philosophy of falsificationism. Another, even more damning, critique is that you can almost never persuade scientists to cease loving their actually falsified theories—theories which don’t match Reality—especially when those theories are popular or lucrative. Planck offered a superior philosophy: Science, he said, advances one funeral at a time. Still, few have had success in talking working scientists out of falsificationism. That is a talk for another time.


Now another thing to emphasize in Breznau’s experiment was the hugeous pile of models turned in. Over 1,200. Twelve hundred. That’s a lot of models!

With that many, it must be true that making models is easy. Creating theories is simple. The researchers broke no sweat in producing this cache. And neither did the physicists who proposed all those new particles.

In a very real sense, science, doing science, is too easy. Making models is too easy. Calling X a cause of Y is too easy.

And our examples, Breznau and particle physics, are only two small instances. Think about what this means extrapolated to every branch and field of science, the whole world over.

People have thought about it: Enter the replication or reproducibility crisis.


Major replications of what are considered the best papers, from the top journals like Nature and Science, have been attempted by several groups over the last decade or so. These were large and serious efforts to attempt to duplicate original experiments in the social sciences, psychology, marketing, economics, medicine and others.

What is stunning is that the results from these efforts were the same: only about half the replications worked, and half did not. And of the half that worked, only half of those—one quarter: that number we had to memorize—were of the same strength of effect size.

Let’s look at medicine.

John Ioannidis, a name familiar to some of you, examined the crème de la crème of papers, which is to say, the most popular papers, the ones with over 1,000 citations each.

Scientists count their citations like influencers count their “likes.” Scientists with their h-indexes, impact factors, source normalized impacts per paper and all the rest, and the way they eagerly share and scrutinize these “metrics”, can be said to have invented social media.

Anyway, Ioannidis examined forty-nine top papers. Here’s what he found: “…7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged.”

Only a quarter of papers. Twenty five percent. Doesn’t that sound like Breznau’s experiment?

The British Medical Journal’s 2017 review of New & Improved cancer drugs found that only about 35% of new drugs had an important effect, and that “The magnitude of the benefit on overall survival ranged from 1.0 to 5.8 months.” That’s it. An average of three months.

Richard Horton, editor of The Lancet, in 2015 announced that half of science is wrong. He said: “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”

The half of science that is wrong is, I emphasize, the best science. Consider how bad it must be in the lower tiers.


You might have heard of recent work by Russell Funk and others. They noticed that the production of what they call “disruptive science” has plummeted since 1950. By this they meant genuinely new (and not just “novel”) and foundational work. It has all but stopped, and in all fields.

Is this because science has already made most discoveries, and we’re now in a wrap-up phase? Or is it because of a deeper problem?

In any case, there is no possibility, at all, that all the papers produced by science today are correct, and even those that are correct seem to be of less and less real use.


All right, we have learned that something like three-quarters, or even more, of science is wrong or badly over-certain. And, of course, some is true science, but even this is increasingly of less value.

There is no symmetry here. Even if half of science is true, the half that is wrong takes more time and resources to handle or counter, because the bureaucracy manages science, and our rulers are free to pick and choose “The Science” they like.

Did you ever notice they always say “The Science” and not plain “science”?

Now the number of published papers has grown from about a quarter million a year in 1960 to about 8 million now, a number still heading north. Because most of it is wrong, and because of the harms of bad science, we’re forced to conclude there is too much science. There are too many scientists, there is too much money and too many resources being spent on science.

The solution to this glut is easy. In principle. Stop doing so much science! Alas, there is little hope we’ll see any calls for less science education or lowered spending.

Let’s instead explore why it’s so easy to produce bad science, and what counts as bad science.

Some of these reasons are easy to see. Like peer review. Because scientists really must publish or perish, they are to large degree at the mercy of their peers, who act as gatekeepers to journals.

Richard Smith, former Editor of BMJ, in 2015 said, “If peer review was a drug it would never get on the market because we have lots of evidence of its adverse effects and don’t have evidence of its benefit. It’s time to slaughter the sacred cow.” Again, alas, it won’t be.

Peer review added to the surfeit of papers results in a system that guarantees banality, penalizes departures from consensuses, limits innovation, and drains time—almost as much as writing grants does. For not only must you publish or perish, you must provide overhead for your dean.

These, and activities like fraud (which is growing because of the increasing money and prestige of science), are all of known negative effect. So let’s instead think about deeper problems. Philosophical problems.


Finally we come to the philosophy of science, ostensibly this talk’s title. Unfortunately, we could not start with that subject because of the universal awe in which science is held. I had to at least attempt to show that this awe is not always justified. Now I hope to show that philosophy has something to do with this.

What is the nature or goal of science? I claimed earlier it is to understand the causes of observable things. Why and how and when X causes Y. Many, or even most scientists do not disagree with that, though some do. The agreement depends on which philosophy of nature one espouses, and which philosophy of uncertainty, and of what models and theories are. And here there is much dispute.

Some, calling themselves instrumentalists, are satisfied with statements like “If X, then Y.” This is similar to “X causes Y”, but not the same. If X, then Y merely says that if we know X, then Y will follow in some way. It doesn’t say why, or say why entirely.

Instrumentalism can be useful. Consider a passenger in a jet. She has no idea how the engine and wings work together to cause the plane to fly. But she sees, and trusts, that the plane will fly. If X, then Y.

This happens in science, too, as when experimenters try varying conditions just to see what happens. Lee de Forest, inventor of the triode vacuum tube, which he called the “audion”, had no idea how it worked. Nobody did, at first, and there were even many wrong guesses, but that didn’t stop RCA and others from using this obviously superior device in early radios.

But instrumentalism is never completely satisfying, is it? Just knowing If X, then Y? If you plug the audion into a certain circuit, a louder signal emerges. Isn’t it far superior proving that the grid, when similarly charged as the cathode, impedes electron flow to the plate, and when oppositely charged the flow increases, hence the triode amplifies the signal on the grid? X causes Y.

So cause is our goal in science, or should be. But that doesn’t mean it’s easy. There are many ways for this goal to be missed—or mistaken.

At last, here are some (but not all) of the ways science goes wrong in its fundamental task of discovering why and how and when X causes Y. I’ll go from easiest to understand to hardest to explain.


1. X is not measured, but a proxy for X is, and everybody forgets the proxy.

This one is extraordinarily popular in epidemiology. So much so that without it, the field would be almost barren. This error is so common, and so fruitful at producing bad science, that I call it the epidemiologist fallacy, which combines the ecological fallacy—mistaking the proxy for X as X—with mistaking correlation for causation.

PM2.5—dust of a certain size—is all the rage, and is investigated for all its supposed deleterious effects. There are a slew of papers saying PM2.5 is “linked to” or “associated with” heart disease or some such thing.

Problem is, actual intake of PM2.5 is never measured, only rough proxies of “exposure” are given.

Such as zip codes used to determine one’s recorded primary residence and its distance from a highway, then a model of how much PM2.5 is produced by that highway, and how much PM2.5 is thus available at your house, where it is assumed that availability is your exposure. And that exposure is your intake. Get it?

Understand that the error is not falsely claiming PM2.5 causes heart disease. It may, it may not. The mistake is over-certainty. Vast over-certainty. There are too many steps in the causal claim to know what is going on.

I can’t resist telling you my all-time favorite instance of the fallacy. Some researchers from Harvard’s Kennedy School claimed X causes Y, that attending a Fourth of July parade turns kids into Republicans.

Parade attendance was never measured.

Instead, they measured rainfall at the location of people’s listed residences when they were children. If it rained, they assumed no parades took place, and so no kid went to one, even if that kid was at a parade at grandma’s house. If it didn’t rain, they assumed every kid did attend, even if they were away at camp.

They used causal language: “experiencing Fourth of July in childhood increases the likelihood that people identify with and vote for the Republican party as adults.”

Thus San Francisco, which rarely sees rain in July, should be a hotbed of Republicanism.

2. Y is not measured, but a proxy for Y is, and everybody forgets the proxy.

Sometimes neither X nor Y is measured, but everybody acts as if both were. This becomes the double-epidemiologist fallacy. You find this in sociology a lot. And in experiments allowing “multiple endpoints” in medicine. The outcome might be the multiple endpoint, “AIDS, or pancreatic cancer, or heart failure, or hangnails”, and so if we hear a claim that some new drug lessened the endpoint, we are not sure what is being claimed.

The CDC is a big user of this fallacy. This was how they talked themselves into mask mandates—in spite of a century’s worth of studies showing masks did not work in stopping the spread of respiratory viruses.

During the covid panic, one of their “major” studies looked at “cases”—by which they meant infections—in counties with or without mandates; or, rather, they looked at changes in rates of infections. But to tell whether masks stop respiratory bugs from spreading, one must measure the use of a mask and the subsequent infection or lack of it. If X, then Y. From which we might arrive at X causes Y. Measuring odd things like county-level changes in rates of “cases” with and without mandates does not tell you this. Neither X nor Y has been measured. Cause remains vague to an extreme degree.

Incidentally, one study did it right. In Denmark, researchers taught one group how to use the best masks properly, and gave them a bunch of free ones, and another group went mask free. They measured individual infections afterwards. No difference in the groups. Anyway, if masks work, masks would have worked.

3. Attempting to quantify the unquantifiable.

Thomas Berger’s novel Little Big Man (eschew the movie) tells the tale of Jack Crabb, a white boy adopted into and raised by a Cheyenne clan around 1850. Years later, Crabb finds himself back among the whites, and is amazed at all the quantification. “That’s the kind of thing you find out when you go back to civilization: what date it is and time of day, how many mile from Fort Leavenworth and how much the sutlers is getting for tobacco there, how many beers Flanagan drunk and how many times Hoffmann did it with a harlot. Numbers, numbers, I had forgot how important they was.”

Too important.

Let me ask you, right now, how happy you are. You in the audience now. On a scale from minus 17.5 to e—the natural number e—cubed. I could have asked on a scale from 1 to 5, maybe, which allows me to scientifically put my happiness score on a Likert scale, the scientific name given to assigning whole numbers to questions.

Let’s be serious, and do real science, and call my measure the Briggs instrument. Questionnaires are called instruments when they are quantified, the language an attempt to borrow the rigor and precision of real instruments like oscilloscopes or calipers.

Suppose I polled the left half of the room, and then the right half, and there were differences in happy scores. Would I then be able to say, sitting on the left half of lecture halls causes less happiness in after-dinner speech listeners? I should be: that’s how science is done.

It’s not that the patented Briggs instrument tells us nothing about happiness. Take two people, one who answered the highest and one the lowest. There is probably a real difference in happiness between these two people. It’s that we’re not quite sure what this real difference is.

What does happy mean? Moby Thesaurus says: “accepting, accidental, ad rem, adapted, addled, advantageous, advisable, applicable, apposite, appropriate, apropos, apt, at ease, auspicious, beaming, beatific, beatified, becoming, beery, befitting, bemused, beneficial, benign, benignant, besotted, blessed, blind drunk, blissful, blithe, blithesome, bright, bright and sunny, capering, casual, cheerful,” and on and on and on.

Each of these gives a different genuine shade of happy. How do we know those answering the patented Briggs instrument mean the same shades?

The typical response is to claim our instrument has been validated. And this means, roughly, that it was given to more than one group of people and that the answers came out about the same. That’s not true validation—which isn’t possible.

4. Mistaking correlation for causation.

Every working scientist knows the adage: correlation doesn’t imply causation. Sadly, just like confirmation bias, that’s for the other guy. Most cannot resist the temptation to say my correlation is my causation.

Why? The practice of announcing measures of model or theory fit as proof of cause.

The Lancet’s Horton, whom we met earlier, also said, “Our love of ‘significance’ pollutes the literature with many a statistical fairy-tale”. This “significance” is a word with a definition bearing no relation to the normal English word. It means having a wee p-value, a bit of math with which there are so many things wrong we could take an hour detailing them.

So we’ll leave it at this: significance, i.e. a wee p-value, is when a model fits a set of data well. It is taken, often, to mean cause has been found. This is always a fallacy. Cause may exist, but it can never be demonstrated by “significance”. It is always a fallacy because this significance is only a measure of correlation. And we all agreed correlation does not imply causation.

It is only the laziest of researchers who cannot find “significance” in some way for his dataset. For there is an infinity of models available to choose from; the number is no exaggeration. So correlation can always be had. At least one model can always be found for any set of data to exhibit “significance.” Which just means, remember, that the model fits the data well, that correlation exists.
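To see how easy finding “significance” is, here is a minimal sketch of my own (an illustration, not part of the talk). It generates a purely random outcome and 500 purely random “predictors”, then counts how many achieve a wee p-value by chance alone; the p-value uses the standard Fisher z approximation to the correlation test.

```python
# Sketch: with enough candidate models, "significance" appears in pure noise.
import math
import random

random.seed(1)

def pearson_r(x, y):
    """Sample correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def p_value(r, n):
    """Two-sided p-value via the Fisher z approximation:
    atanh(r) * sqrt(n - 3) is roughly standard normal when there is
    no true correlation."""
    z = abs(math.atanh(r)) * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

n = 30                              # observations of the outcome Y
y = [random.gauss(0, 1) for _ in range(n)]

pvals = []
for _ in range(500):                # 500 candidate "models": random predictors X
    x = [random.gauss(0, 1) for _ in range(n)]
    pvals.append(p_value(pearson_r(x, y), n))

hits = sum(p < 0.05 for p in pvals)
print(f"'Significant' correlations found in pure noise: {hits} of 500")
```

By construction no predictor has anything to do with the outcome, yet roughly one in twenty will clear the 0.05 bar anyway. A researcher free to try many models, then report only the “significant” one, will nearly always succeed.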

There are endless examples to choose from. Endless. My favorite is the evils of third-hand smoke. You have heard of second-hand smoke, that smoke and whatnot that comes out of smokers which somehow affects non-smokers.

Third-hand smoke isn’t smoke at all, but the byproducts of smoking that come off of smokers and leave a trace, long after smokers are gone, where unwitting non-smokers may stumble across them.

A team of researchers went into a theater where smokers once were, and at which non-smokers attended later showings absent any smokers. They concluded, because of significance, that sitting in the chairs smokers once sat was like sucking in the “equivalent of 1 to 10 cigarettes of secondhand smoke.” Which is about the same number of cigarettes heavy smokers go through during a movie.

The result is absurd.

But believed. According to one report, “The effects were particularly pronounced during R-rated films, like ‘Resident Evil,’ which the authors suggested was because such movies attract older audiences more likely to have been exposed to smoke.”

Significance is also why there exist conflicting headlines like, “One egg a day ‘LOWERS your risk of type 2 diabetes'” and “Eating just one egg a day increases your risk of diabetes by 60 percent, study warns.” I have a collection of these things: science says just about everything will both kill and cure you.

It’s not only bad statistics. Those physicists inventing that particle zoo also measured success by how well their models fit anomalous data. That’s why they made the models, to fit those anomalies.

Model fit is a necessary but far, far from sufficient criterion of model goodness. Models can always be made to fit. Not all can be made to represent Reality. This is why I stress no model that has not been independently tested against Reality can be trusted. Most models are not so tested. It depends on the field, but in some areas, usually the so-called softer sciences, models are never independently checked.

5. Multiplication of uncertainties.

We all agree that the planet needs saving. Everybody says so. From global cooling.

When climatology was becoming a new field, they really did say a new ice age was coming.

Newsweek in 1975 reported, “There are ominous signs that the earth’s weather patterns have begun to change dramatically and that these changes may portend a drastic decline in food production”.

Time in 1974 said, “Climatologist Kenneth Hare, a former president of the Royal Meteorological Society, believes that the continuing drought…gave the world a grim premonition of what might happen. Warns Hare: ‘I don’t believe the world’s present population is sustainable if [trends continue].'”

There are scores upon scores of these, the scientists and groups like the UN warning of mass deaths by starvation and so on.

Well, climatological science grew, and the temperature warmed, and then we got global warming. Caused, incidentally, by the same thing said to cause global cooling: oil.

Global warming in time became “climate change”, a brilliant name, because the earth’s climate changes unceasingly. Thus any change, which is inevitable, can be said to be because of “climate change.” Correlation becomes causation with ease here.

“Climate change” was quickly married to scientism, where it came to be synonymous with “solutions” to “climate change”. Because of this error, doubt expressed about the so-called solutions caused one to be called a “climate change denier”—an asinine name, because no working scientist, not one, denies that the earth’s climate changes or that man affects it.

Janet Yellen recently said that “Climate change is an existential threat” and that the “world will become uninhabitable” if—you know the rest—if we don’t act.

Uninhabitable is a mighty word. Rode and Fischbeck in 2021 examined environmental apocalyptic predictions and discovered that the average time until The End, for those saying we “Must act now”, as Yellen did, is about nine years.

Predictions of only nine years left started gradually in the 1970s. They now happen regularly.

Funny thing about these forecasts is that failure never counts against the theory. Which is another strike against falsification.

That is a story unto itself. Let’s instead peek at the science of “climate change.” Not at the thermodynamics or fluid physics, which is too much for us here, but at the things which are claimed will go bad because of “climate change.”

Which is everything. There is no ill that will not be exacerbated by “climate change”, and there is no good thing that will escape degradation. “Climate change” will simultaneously cause every beast and bug and weed which is a menace to flourish, and it will corrupt or kill every furry, delicious, and photogenic animal.

There is a fellow in the UK who collects these things. His “warm list” total right now is about 900 science papers, an undercount. Academics have proved, to their satisfaction, that “climate change” will cause or exacerbate (just reading the first few): “AIDS, Afghan poppies destroyed, African holocaust, aged deaths, poppies more potent, Africa devastated, Africa in conflict, African aid threatened, aggressive weeds, Air France crash, air pockets, air pressure changes, airport farewells virtual, airport malaria, Agulhas current, Alaskan towns slowly destroyed, Al Qaeda and Taliban Being Helped, allergy increase, allergy season longer, alligators in the Thames”. And we haven’t even come close to getting out of the As.

There is not one study, that I know of, that remarks on how a slight increase in globally average temperature will lead to more warm, pleasant summer afternoons.

That a small change in the earth’s climate, caused by man or not, can only be seen as wholly and entirely bad, and can in no way be good, is sufficient proof, I think, that science has gone horribly wrong. It’s not logically impossible, of course, but it cannot be believed.

Yet this doesn’t say how these beliefs are generated. They happen by some of the reasons we’ve already mentioned, but also by forgetting the multiplication of uncertainties.

Given knowledge of coins, the chance of a head on a flip is one half. Two heads in a row is one quarter: the uncertainties are multiplied. Three in a row is one eighth; four is one in sixteen. If the event of interest is that string of four heads, we must announce the small probability of about 6%.

It would be an obvious error, and silly mathematical blunder, to say the probability is “one half” because the chance of the last head is one half. And it would be outrageous if a headline were to blare “Earth will see a Head on last throw.” Agreed?

That’s exactly how “climate change” scare stories are produced.

We first have a model of climate change, and how man might affect the climate. There is only a chance this model is correct. It is not certain.

We next have a weather model, which rides on top of the climate model, and which says how the weather will change when the climate does. This model is not certain, either.

We then have a third model in how some item of importance, the welfare of some animal or size of coffee production or whatever, is affected by the weather. This third model is not certain.

We finally, or eventually, have a fourth model which shows how a solution will stop this bad thing from happening. This model is also uncertain.

In the end, it will be announced “We must do X to stop Y”. This is equivalent to “Earth will see a Head.” Causal language. Which we agreed was an error.

The chain of uncertainties must be multiplied. The greater the chain, the more uncertain the whole must be. This is never remembered. But must be, especially when the number of claims grows almost without bound.
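The coin arithmetic, applied to the four-model chain just described, can be sketched in a few lines. The individual model probabilities below are invented purely for illustration; the point is only how quickly they compound.

```python
# Coin-flip arithmetic from the text: the chance of four heads in a row
# is the product of the individual chances, not the chance of the last flip.
p_four_heads = 0.5 ** 4
print(p_four_heads)  # 0.0625, about 6%

# Hypothetical chain of model probabilities (numbers invented for illustration):
# climate model correct, weather model correct, impact model correct,
# "solution" model correct.
chain = [0.9, 0.8, 0.7, 0.6]
p_all_correct = 1.0
for p in chain:
    p_all_correct *= p
print(round(p_all_correct, 3))  # 0.302
```

Even granting each link in the chain generous odds, the probability that the whole claim “We must do X to stop Y” is correct falls to under a third, far lower than the confidence of any single model in the chain.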

6. Scientism.

Pascal commented on “The vanity of the sciences. Physical science will not console me for the ignorance of morality in the time of affliction. But the science of ethics will always console me for the ignorance of the physical sciences.”

Scientism is the mistaken belief that science has all the answers, that all things should be done in the name of, or justified by, science. Yet science cannot tell right from wrong, good from bad.

I wish we had time to thoroughly dissect scientism. Its effects are vast and devastating. I’ll mention only the gateway drug to serious scientism, which I call Scientism of the First Kind.

This is when knowledge which is obvious or has been known since the farthest reaches of history is announced as “proved” by science. This encourages belief in the stronger, darker forms of scientism.

Examples? A group researched whether laptops were distracting to students in college classrooms. The Army hired a certain corporation to investigate whether there are sex differences in physical capabilities.

Guess what they both “discovered.”

7. The Deadly Sin Of Reification: Mistaking models for Reality.

We are in rugged territory here, for the closer we get to the true nature of causation, which requires a clear understanding of metaphysics, the subtler the mistakes that are made, and the more difficult they are to describe. Plus, I have detained you long enough. So I will give only one instance of the Deadly Sin, in two flavors.

It would, I hope you agree, be an obvious fallacy to say that Y was not or cannot be observed, when Y was in fact observed, because some theory X says Y is not possible. Yes?

This error abounds. X is some cherished model or theory, and Y an observation which is scoffed at, dismissed, or “explained” away, because it does not accord with theory.

This happens in the least sciences, like dowsing or astrology, where practitioners reflexively explain away their mistakes. But it also happens with great and persistent frequency in the greatest sciences, like physics.

The most infamous example of Y is free will. There are, of course, subtleties in its definition, but for us any common usage will do. We all observe we have free will: choices confront us, we make them.

Yet certain theories, like the theory of determinism, which says all there is is blind particles obeying something mysteriously called “laws”, proves free will is impossible. It does, too. Prove it. If we accept determinism. Which many do.

Because scientists are caring people, and want what’s best for man, saying determinism makes free will impossible leads to an endless series of papers and articles with this same profound, and hilarious, message: if only we can convince people they cannot make choices, they will make better choices! I promise you will see a version of this sentence in every anti-free will article.

It also leads to the current mini-panic over “AI”, or “artificial intelligence.” Which it isn’t: intelligence, that is.

All models only say what they are told to say—a philosophic truth that, when forgotten, leads to scientism—and AI is only a model. AI is nothing more than an abacus, which does its calculations in wooden beads at the direction of a real intelligence, with the beads replaced by electric potential differences.

But because the allure and love of theory is too strong it is believed computer intelligence will somehow “emerge” into real intelligence, just like the behavior of large objects is said to “emerge” from quantum interactions.

I will upset many when I say this is always a bluff, a great grand bluff.

There is no causal proof of “emergence”: if there were, it would be given. Talk of emergence is always wishful thinking, reflecting a desire not to question the philosophy of what Robert Koons and others call microphysicalism, the ancient Democritean idea that everything is just particles bumping into things.

There are alternatives to this philosophy, like the revival of Aristotelian metaphysics, which would do wonders for quantum mechanics if it were better known. Unfortunately, we haven’t the time to cover any of them.

The Deadly Sin Of Reification, the mistaking of models for Reality, is much worse than I have made it sound. It leads to strange and untestable creations like the multiverse and many worlds in physics, and like gender theory, and all that they have wrought.


That’s what I have to say about bad science. Maybe I’m wrong. So I’ll end with the most frequently used scientific words: more research is needed.

Subscribe or donate to support this site and its wholly independent host using a credit card: click here. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.


  1. Hagfish Bagpipe

    Action-packed speech. Jack Crabb, small and large, even pops in to comment, ironically, on quantification.

    A nice, compact rhetorical punch-up, Briggs, that should certainly knock the wind out of Certainty’s over-blown sail.

  2. Briggs


    You will have noted the tie. The pocket square was obscured by the badge.

  3. Anyone actually conversant with the philosophy of science would know that Popperian falsification is a necessary, not a sufficient condition. The false description of it presented here is a result either of ignorance or dishonesty.

    Anyone with regular contact with working scientists would be aware that they occasionally invoke the actual Popperian principle, and essentially never the false version presented here, which would be ridiculous. To suggest otherwise is either a sign of insulation from the actual activities of science, or dishonesty.

    Most of the rest of this talk is more of the same. The points about p-values and reproducibility are justifiable, but nothing new. These criticisms date from the 1950s, and from the inventor of the p-value himself, who warned how it could easily be misused.

  4. Briggs


    You’ll notice it’s only working scientists who still stick up for Popper. Most philosophers have moved on, the number of critiques having reached a size that cannot be ignored.


    Scientific Irrationalism: Origins of a Postmodern Cult
    Popper and After: Four Modern Irrationalists
    Against the Idols of the Age

    All by David Stove.

    Or see:

    What Science Knows: And How It Knows It

    by James Franklin.

    Or search here for “falsification”, “falsifiability”. Many articles.

    Who was it that said the old guard always dismiss ideas they don’t like with “It’s all old”? Or “I did this twenty years ago”?

  5. Incitadus

    “Now these models were from the so-called soft sciences: sociology, psychology, education and the like. It’s not surprising there are frequent errors from these fields because of the immense and hideous complexity of their subject.”

    Scoff if you must, but it is from these fields of inquiry that the world is currently experiencing the greatest human transmogrification in history. Witch burnings and genocide to follow; all that is old is new again.

  6. @Briggs, for something to falsify there must be something satisfied by the same statement; otherwise there is no choice, no competition, no progress, just babbling (I do not mean you). The other aspect is the grounding of thoughts/ideas (left out of this response of mine).

  7. 1 – to me the “soft sciences” are not sciences; merely pretenders – like the stolen valor people.

    2 – in the pearls before swine category: I forwarded links to the video to some people who might benefit but will probably quickly stop watching because they’ll find the content “discomforting”. (Sometimes I feel like a 2nd tour guy facing a newly minted half lt with a map.)

    3 – you are (of course 😉 ) wrong about determinism. When you toss a coin the outcome is 100% determined by the forces acting on the coin – we don’t know whether it’s heads or tails until after it lands, but that does not mean both outcomes were possible – it just means we lack the information needed to predict the outcome with the same certainty reality does. 0 < p < 1 is a range defined by a lack of information, not by multiple possible outcomes or free will.

    4 – in response to your column here yesterday I wrote a response that went on a bit much(ly). In my defence, I did use the word "emergence" – twice! – see winface.com/oldwin/sr.html Have a look – you may find it relevant to the discussion here over the last little while.
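    The determinism claim in item 3 above, that 0 < p < 1 measures missing information rather than genuinely open outcomes, can be sketched in a few lines of code. This is an editorial illustration, not anything from the talk or the comment; the `deterministic_toss` function and its inputs are invented for the purpose.

```python
import hashlib

def deterministic_toss(initial_conditions: str) -> str:
    """The outcome is a pure function of the initial conditions:
    same inputs, same result, every single time."""
    digest = hashlib.sha256(initial_conditions.encode()).digest()
    return "heads" if digest[0] % 2 == 0 else "tails"

# An observer who knows the initial conditions predicts with certainty:
outcome = deterministic_toss("force=2.1N, spin=31rad/s")
assert outcome == deterministic_toss("force=2.1N, spin=31rad/s")

# An observer who does not know them sees a frequency near 1/2 across
# many tosses: the "probability" quantifies that observer's ignorance,
# not any openness in the outcomes themselves.
tosses = [deterministic_toss(f"toss #{i}") for i in range(10_000)]
print(tosses.count("heads") / len(tosses))  # near 0.5
```

    Nothing random ever happens inside `deterministic_toss`; the 50/50 frequency exists only relative to the observer who lacks the inputs, which is the commenter's point about 0 < p < 1.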

  8. Uncle Mike

    Exceptional! Required reading. Watching, eh, maybe not, but the paper should be read by everyone. Make it the law.

  9. Not Buying It

    Maybe it’s just bias confirmation, but reading your articles always relieves the tension I feel when confronted with the “real world according to science”. I wander around feeling lost and frustrated. “Is it me? Am I crazy? Maybe I’m just a contrarian and they are correct? Maybe sodomy is a good thing and women CAN be men?” While that could be true, at least you give me a plausible explanation as to why they might be wrong instead. And a moment of relief.

  10. > These were large and serious efforts to attempt to duplicate original experiments in the social sciences, psychology, marketing, economics, medicine and others.

    One notes the absence of: physics, chemistry, biology, geology, engineering, math; i.e. what most people think of when they think of STEM. Yes there’s some bad research and some fake results therein, but they’re exceptions.

    Medicine “science” (medicine itself is a practice) has evolved from “clinical research,” i.e. “[one] patient had these symptoms, we did this, it worked,” to “experimental” medicine of the “33% of sample showed marked improvement; 33% of the sample showed no change; and the third rat died” variety. That, plus the incentives (monetary, not academic) to fake data in order to get $$$ from Pharma, have predictable results.

    Marketing, a field I know a little bit about, is full of papers reporting market research results, which as any market researcher knows aren’t “laws,” they are measurements of temporary market conditions and therefore don’t replicate when the market conditions change. (There are also other problems with the field.)

    Economics, when it’s not making the same error as marketing, is plagued by the problems of social sciences (next) and by physics-envy, where economists try to pretend that models of spherical cows in a vacuum are actual descriptions of reality, to widespread acclaim in their profession and even more widespread derision outside it. And they get all the problems of:

    Social sciences, including psychology, are mostly political fields with minimal (if any) predictive ability, where fitting in with the current dominant paradigm is key to personal success, and since they aren’t really used for any real-world decision support, rather are cherry-picked for support of positions already decided, they are exempt from the validation real (not “hard,” real!) sciences get from their engineering applications.

    (Preempting the predictable: Sabine Hossenfelder:Physics::Richard Dawkins:Biology. I’d point interested parties to Lubos Motl, but unfortunately he nuked his blog.)

    Every time someone uses GPS they’re validating special and general relativity; every chip made today validates advances in materials chemistry from the last decade; engineered bacteria that make biochemicals much more cheaply than extracting them from animals or plants validate molecular biology advances from the last few years; and in case no one noticed, we have an autonomous robot helicopter flying around on Mars and just sent a robotic probe to explore the moons of Jupiter.

    Science is fine.

  11. Don Newmeyer

    I agree that peer review has the essential flaw that it inhibits true innovation, but in my own experience it has also helped reduce certain errors in methods and conclusions, e.g. missing controls, improper statistics, etc.

  12. philemon

    I had to look David Stove up, as I left philosophy of science off with Feyerabend: Science is as Science Does.

    That is, the (internal) argument’s the thing. If you (a philosopher) think there’s pseudo-science afoot, then join the debate. Apparently (from, *cough* Wikipedia), Stove didn’t like the idea that logic is the study of deductive inference. Inductive Logic never gelled for me, as in how it might differ from deductive statistical reasoning, but I’m always open to new lines of inquiry.

    I’m not clear at all what Briggs’s point was. Philosophers have moved on? So? If scientists find Popper (and maybe Lakatos) helpful, then… Yay Popperians!

    Let’s not mention Soros and his gross expropriation of Popper’s political philosophy.

  13. Robert Berger

    Take Climate Change
    You can be totally convinced something is wrong but
    not think it is fixable
    not think it is the best use of money
    not think the basis is correctly stated
    not think that it is moral to force Africa to abandon fossil-fuel-needing progress

    Any two-sided view of any issue is automatically unscientific

  14. Edmund Hurlbutt

    Returning, in light of this, to the question of evolution that you have dissected many times, I’m not convinced of your analysis. 1) I get it that there’s no such thing as antecedent “probability” — although, it seems to me, knowing the full, already existing data of possible outcomes, one can calculate the “odds,” as in drawing a straight flush. But 2) you simply assert that — intra-species — “evolution is a fact,” an already existing, observed reality. But on what basis? Observation? 3) So far as I have read, intra-species “evolution” does occur, has been “observed,” as varied environments favor increased expression of certain species traits already included in the genome. 4) Meanwhile, although a sequence of actually different species has been observed (e.g. in the fossil record), “evolution” in the sense of a new, factually, actually different species coming from a prior different one has not been observed (e.g., as I understand it, the “transitional” forms just keep not showing up!). 5) Instead, as per the Cambrian explosion, e.g., just the opposite is observed. Thus 6) to my mind, “evolution” in the fuller sense — from one species to another — is not a “fact,” has not been actually “observed,” and thus cannot stand as the starting point (as I understand it to be) for your entire analysis of the matter, including your rejection of ID. Which may be why 7) we never get an explanation — or, as you rightly point out about science in every case, merely a description — of “evolution” that does not invariably reduce to the “Accident of the Gaps!” 8) Moving on, I also find unconvincing your resort to a plinko/pachinko board as an explanation/description of intra-species evolution, as well as your means of rejecting ID as “trivially” true, and for multiple reasons. 8a) Your plinko/pachinko analogy takes for granted that the ball survives to make it to a given slot.
    (I must add here that I’m not quite clear on whether the ball is the organism, or the slots are the organism, or perhaps the slot is the advantageous environmental niche.) But in either case, why do you presume that the organism itself survives INTERNALLY, in the first place, whatever internal (i.e., genetic) changes “lead” — “force,” as you say — it to become a different species? On what basis? 8b) Which leads to the really challenging two questions (for you): first, that the genetic code has all the characteristics of, and operates as, a language — a “fact” so overwhelmingly observed, analyzed, etc., that if the genetic code cannot be classified as a “language,” then nothing can; and second, that the invariable observation is that languages are always the work of an intelligence. Always. 8c) This language is also so astoundingly complex, massive, detailed, and incomprehensibly (well, almost) lengthy — even in the biology of a single cell — that any “change” to the code not inserted by the intelligence behind it is overwhelmingly likely — by the odds, which can be calculated, not the probability — to result in incoherence, and thus internal dissolution regardless of the environment. And to reiterate, even granting that probability does not exist, odds do exist and can be calculated. Or put another way: ALL we know about H or E (I forget which is which in your dismissal of probability) DOES include the observable genetic language of even a single cell. And thus the odds (as opposed to any antecedent “probability”) of a change that does not result in the dissolution of the actually existent individual can be calculated. (Or at least theoretically so, although the calculations required would seemingly be, well, incalculable!)
    8d) The rigorous, observed, almost invariable genetic fidelity-to-type of organisms as they reproduce, coupled with the overwhelming odds (NOT probability) against any code-change not being destructive to the internal coherence, thus life, of the organism — regardless of its environment — renders statistically dubious the resort to “unimaginable lengths of time” as the “explanation” of intra-species “evolution.” It’s rather like an Accident of the Gaps combined with a You Can’t Imagine How Long game of three-card “scientific” monte. What’s more, as the Cambrian Explosion shows by observation, “unimaginable lengths of time” are precisely not how all modern body types came into existence. So 9) given all this, the very existence of this incomprehensibly specific, intricate, stubbornly perduring language/word, along with the fact that the odds (not probability) can be (theoretically) calculated because we can measure the length and complexity of the “word” of any given species, means we can (theoretically, since the calculations would be so lengthy) calculate the odds that this “word” could possibly arise “accidentally” (whatever that means — and, in fact, it means nothing, as you say, or rather “we don’t know”). And thus the odds of this language and its genetic-species expression both a) existing without an intelligence, and b) by (a-intelligent) “force” producing this incalculably lengthy, precise “word,” become absurd.

  15. Edmund Hurlbutt

    VERY BIG OOPS!!!!!!!! Point 2 should read “INTER-species evolution,” NOT “intra-species evolution”

  16. Dieter Kief

    Wonderful. Thx.!
    Posted it on X.
