One of the key fallacies of scientism, in the sense of being the most destructive to common sense and personal wellbeing, is to suppose that any theory put forth in the name of science is therefore true, or certain enough to believe as true. The posited theory is, after all, scientific and, so scientism says, there is no better recommendation to truth than this.
This fallacy is field dependent, cropping up in some areas much more frequently than in others. It is rare, though it does happen, to find the speculations of chemists being refuted each new generation. But it is common as daylight to find the hypotheses put forth by sociologists, economists, and psychologists refuted not a generation after they are published, but often in the next issue of the same journal.
For example, the Weekly Standard’s Andrew Ferguson reviews the bad science and scientism behind the recent spate of experimentation “proving” conservatives are dumber/more inflexible/less compassionate/etc. than liberals, theories which are collected in Chris Mooney’s hagiography to scientism, The Republican Brain. (We began a collection of these studies here; please contribute. Also see Mike Flynn’s take on this.)
The studies rely on the principle that has informed the social sciences for more than a generation: If a researcher with a Ph.D. can corral enough undergraduates into a campus classroom and, by giving them a little bit of money or a class credit, get them to do something—fill out a questionnaire, let’s say, or pretend they’re in a specific real-world situation that the researcher has thought up—the young scholars will (unconsciously!) yield general truths about the human animal; scientific truths.
Although he didn’t intend it, Ferguson shows us another fallacy: the supposition that, by virtue of being generated by scientists, “scientific” truths are better than other kinds of truths, say metaphysical or logical truths. Stating it so plainly makes it obvious that if a truth is a truth, then it is a truth, and a truth is not more “truthy” because it comes from a scientist than a truth which comes from (say) a theologian.
The main problem with this summary is that scientists often use the word truth to mean “a belief which is probably, though not 100% certainly, true.” That latter creature is not a truth at all; it is a conjecture and nothing more. A conjecture which is “almost” true, or true “for all practical purposes,” is still a conjecture and not a truth. A truth is only true when it always is, when it can be deduced, when it arises as the end result of a valid argument. That is, conjectures, while they remain conjectures, are not truths, but conjectures might become truths as new evidence arises.
Physicists make the mistake of confusing truth and conjecture just as often as sociologists and psychologists, only the physicists’ conjectures more often turn out to be truths as that new evidence arrives, so their error is of less consequence. Note that it is an error (a fallacy) to say, given evidence less than deductive, that a conjecture is a truth. The error will turn out to be more or less harmful depending on to what use the conjecture is put. If one is betting that a protein will fold a hypothesized way, one has turned a conjecture into a forecast, which is a term that acknowledges a conjecture is less than a truth.
If the conjecture turns out true, because of new evidence from an experiment, then the conjecture turns into a truth and gains are made. If the conjecture turns out false, we again know this based on new evidence, and losses are suffered. The losses are of particular interest: they are smaller the closer the conjecture is to the truth. Which is why the losses are greater in the soft sciences: their conjectures are much more often farther from truth.
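The point that losses shrink as a conjecture approaches the truth can be sketched with a standard scoring rule for probability forecasts. This is only an illustration of the idea, not anything from Ferguson or Mooney; the Brier score used here (the squared gap between forecast probability and outcome) is one common way statisticians penalize forecasts.

```python
# Sketch: a forecast closer to the truth suffers a smaller loss.
# The Brier score penalizes the squared gap between a probability
# forecast and what actually happened (coded 0 or 1).

def brier_loss(forecast_prob, outcome):
    """Squared difference between forecast probability and the 0/1 outcome."""
    return (forecast_prob - outcome) ** 2

# Suppose the protein did fold the hypothesized way (outcome = 1).
outcome = 1
near_truth = brier_loss(0.9, outcome)  # confident, nearly correct conjecture
far_from_truth = brier_loss(0.5, outcome)  # hedged, uninformative conjecture

print(near_truth)       # about 0.01
print(far_from_truth)   # 0.25
```

The conjecture sitting closer to the truth (0.9 when the event happened) loses roughly 0.01; the vaguer one loses 0.25, twenty-five times as much.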
One reason for the difference is that physicists more often than sociologists test their conjectures against reality. Another reason is that the evidence for a conjecture for the hard sciences is not just statistical, as it often is for soft-science conjectures. And any conjecture which relies primarily on statistics—given, that is, how statistics is practiced today—should not be trusted.
I’ll insert my usual plea that soft scientists act more like their hard (knock) brothers. Do not just assemble a one-time batch of data, compute some statistical model, tell us how well that model fits your data, and then assume that because the fit is “good” your conjectures are therefore true. This is formally a fallacy and is the weakest kind of evidence there is, but it is (almost universally) the only kind which is offered.
Instead do two things: (1) make every effort to discover evidence which refutes your conjecture (and then tell us about it). And (2) as hard scientists (often) do, make predictions of data you have never before seen. If you do both these things, then you can ask us to believe your conjectures. Otherwise, keep quiet.
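The second recommendation, judging a conjecture only on data it has never seen, can be sketched in a few lines. The numbers and the simple mean-predictor below are entirely hypothetical; the point is only the division of labor between the data that formed the conjecture and the fresh data that tests it.

```python
# Sketch of recommendation (2): form the conjecture on one batch of data,
# then judge it solely by predictions on data gathered afterward.
# All values here are hypothetical, chosen only for illustration.

old_data = [2.1, 1.9, 2.0, 2.2, 1.8]  # data used to form the conjecture
new_data = [2.0, 2.1, 1.9]            # fresh observations, never seen before

# The conjecture: "the quantity is about 2.0" (here, the mean of old_data).
conjecture = sum(old_data) / len(old_data)

# Out-of-sample error: how far the conjecture misses the new observations.
errors = [abs(conjecture - x) for x in new_data]
mean_error = sum(errors) / len(errors)
print(round(mean_error, 2))
```

A small out-of-sample error is evidence the conjecture is near the truth; a large one is the refuting evidence recommendation (1) asks researchers to report rather than bury.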
Thanks to the many readers who pointed me to Ferguson’s article, including Mike Flynn and Doug Magowan.