The Real Anti-Science Crowd

Testing another conservative brain.

People are starting to notice that not all is well in Science. This is happening not only among scientists themselves, not only among Realists like the readers of this blog, and not only among neo-cons like those at the Weekly Standard (see below), but also among stalwart progressives such as Chris Mooney at the Washington Post.

I was shocked to see the article “Liberals deny science, too” by Mooney. Of course, he’s very forgiving and doesn’t really throw himself into his subject. He writes only about a narrow new study:

The study is far from the authoritative word on the subject of left wing science denial. Rather, it is a provocative, narrow look at the question. In particular, the study examined a group of left wing people — academic sociologists — and evaluated their views on a fairly esoteric scientific topic. The specific issue was whether the evolutionary history of human beings has an important influence on our present day behavior. In other words, whether or not we are “blank slates,” wholly shaped by the culture around us.

Lots of blank slaters prefer to believe in the ultimate perfectibility of man. The new study wasn’t really research: it was only yet another in a long line of questionnaires passed off as research.

The new study, by University of Texas at Brownsville sociologist Mark Horowitz and two colleagues, surveyed 155 academic sociologists. Of the sample, 56.7 percent was liberal, another 28.6 percent identified as radical, and only 4.8 percent were conservative. Horowitz, who describes himself as a politically radical, social-justice oriented researcher, said he wanted to probe their views of the possible evolutionary underpinnings of various human behaviors.

And there it is: social-justice oriented researcher. Nothing pro-science about that. Activists at universities are nothing new, but they are growing in strength and number. One article notes “It is not uncommon for social psychologists to list ‘the promotion of social justice’ as a research topic on their CVs, or on their university homepages.”

Science is not the goal of an activist, who decides his findings in advance and only collects what he thinks is confirmatory evidence which he can share with the world. Regular readers already know about the routine asinine uses of statistics (I really have to update this list).

Andrew Ferguson has a relevant piece “The New Phrenology: How liberal psychopundits understand the conservative brain.” We’ve long noted electronic phrenology devices are ubiquitous among researchers (see this, this, this, this, and many more).

Ferguson (where have you heard this before?):

The studies rely on the principle that has informed the social sciences for more than a generation: If a researcher with a Ph.D. can corral enough undergraduates into a campus classroom and, by giving them a little bit of money or a class credit, get them to do something—fill out a questionnaire, let’s say, or pretend they’re in a specific real-world situation that the researcher has thought up—the young scholars will (unconsciously!) yield general truths about the human animal; scientific truths. The scientific truths revealed in Edsall’s “academic critique of the right” demonstrate that “the rich and powerful” lack compassion, underestimate the suffering of others, have little sympathy for the disadvantaged, and are far more willing to act unethically than the less rich and not so powerful.

Here’s the kicker, which regular readers will also recognize:

After many regression analyses and much hierarchical linear modeling, the professors discovered that their conclusion matched their hypothesis…

Science is now, in many areas, just another branch of politics. We’re coming back to this topic later, of course.

Oh, and don’t forget the biggest science denial: global warming. The theory of CO2-enhanced positive feedback which motivates most climatologists has been incorporated into all major climate models. These models have been making lousy predictions (of the future) for twenty to thirty years now: the models have run hot and are running hotter. This implies the theories which underlie these models are in error. Scientifically, therefore, it is best to doubt the veracity of both the models and the theories. Anybody denying this, in ignorance of the physics driving the climate, is anti-science. Most anti-global-warming-science folks are progressives. This is because they believe in the solution to global warming and are largely ignorant of physics.

Update When Charles Murray was asked about the 20th anniversary of The Bell Curve, this is what he said.

I’m not going to try to give you a balanced answer to that question, but take it in the spirit you asked it–the thing that stands out in my own mind, even though it may not be the most important. I first expressed it in the Afterword I wrote for the softcover edition of “The Bell Curve.” It is this: The reaction to “The Bell Curve” exposed a profound corruption of the social sciences that has prevailed since the 1960s. “The Bell Curve” is a relentlessly moderate book — both in its use of evidence and in its tone — and yet it was excoriated in remarkably personal and vicious ways, sometimes by eminent academicians who knew very well they were lying. Why? Because the social sciences have been in the grip of a political orthodoxy that has had only the most tenuous connection with empirical reality, and too many social scientists think that threats to the orthodoxy should be suppressed by any means necessary. Corruption is the only word for it.

Now that I’ve said that, I’m also thinking of all the other social scientists who have come up to me over the years and told me what a wonderful book “The Bell Curve” is. But they never said it publicly. So corruption is one thing that ails the social sciences. Cowardice is another.

Update Here we go: The American Sociological Association sets up a “task force” on global warming. Not a physicist among them. On the other hand, maybe they can do “studies” like this one: “Science or Science Fiction? Professionals’ Discursive Construction of Climate Change”. Turns out “the” Consensus, even after the standard political throat clearing, isn’t as strong as advertised. Who knew?

Update Why Some of the Worst Attacks on Social Science Have Come From Liberals. Author is part of the problem. Why? She doesn’t realize the problem is in these two sentences:

When Dreger criticizes liberal politicization of science, she isn’t doing so from the seat of a trolling conservative. Well before she dove into some of the biggest controversies in science and activism, she earned her progressive bona fides.


  1. Really? You said: “Most anti-global-warming-science folks are progressives.”

  2. That was certainly a representative sampling—and I mean that sincerely. It shows that sampling academics does indeed slant the world toward liberalism and radicalism.

    One should really tell these people that research is not just surveys. If it’s surveys, it’s questionable “science”, as noted by the use of “many regression analyses”. In other words, statistics get you the answer you want if you torture the data long enough. Not science.

    I note that even the article in Sage could not avoid oil-company conspiracy theories. Lewandowsky studied the wrong group…

    All of these “studies” on sociology and psychology in climate change are Marketing Surveys. They are in no way studying the accuracy of claims, just the salability. (The Sage article may not be totally in that realm. I have not read it completely through yet, so I don’t know. Will read it after I complete my errand this AM. 🙂 )

  3. Joy

    ‘The possible evolutionary underpinnings!’

  4. Gary

    Don’t go championing physicists as the solution to progressive global warming analysis. Michael Mann trained as a physicist. Besides rampant progressivism, the problem is too many physicists who don’t know biology, chemistry, and geology and can’t incorporate the real world into their models.

  5. If a climate calculation gets a climate prediction wrong, then, according to your understanding of how science works, the theories that underlie the calculation must be wrong. Since all these calculations use (just for example) the theories of motion that we call “Newton’s laws”, those must be wrong. Not just wrong in the sense of not being relativistic or quantized, but wrong in their appropriate domain of applicability.

    Another possibility is that you’re struggling with a conflated confusion of model, calculation, prediction, and theory.

  6. Steve E

    “Not a physicist among them.”

    That explains how they came to this conclusion:
    “It is now well-established that the primary driving forces of global climate change are based in institutional relations and cultural beliefs.”

    And this gem:
    “…it [Sociology] has a great deal to say about the origins of climate change in social practices…”
    Presumably these “social practices” include staying warm; growing, preparing, distributing and eating food; getting from point A to point B etc. Although, with the sociology crowd, that presumption might not be well grounded in social systems, social relations or social practices.

    Sheesh Briggs, with this example you’ll have to change the title to “The Real No-Science Crowd.”

  7. Scotian

    Lee Phillips,

    “Since all these calculations use (just for example) the theories of motion that we call “Newton’s laws”, those must be wrong.” Etc

    I believe that you know that this is not true. For example, I might try to calculate the ballistic trajectory of a shell fired from a battleship using basic Newtonian physics but find that I keep missing the target. After some thought I realize that I have forgotten to take into account the rotation of the earth (Coriolis effect). My oversight does not call into question the physics, just my use of it. This is the type of error that Briggs accuses the climatologists of. In their zeal they have overlooked something, as he and colleagues have shown in their recent paper. In conclusion, their model is inadequate.
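Scotian's battleship example can be put in rough numbers. A minimal sketch (all figures illustrative, drag ignored, flat-fire approximation d ≈ Ω·v·sin(φ)·t²) of how large the forgotten Coriolis term actually is:

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

def coriolis_deflection(speed_mps, range_m, latitude_deg):
    """Approximate horizontal Coriolis deflection: d = Omega * v * sin(lat) * t^2."""
    t = range_m / speed_mps  # time of flight in the flat-fire approximation
    return OMEGA * speed_mps * math.sin(math.radians(latitude_deg)) * t ** 2

# A 20 km shot at 800 m/s from 45 degrees latitude misses by roughly 25 m
# if the gunner ignores the Earth's rotation. The physics isn't wrong;
# the calculation simply omitted a term.
d = coriolis_deflection(800.0, 20_000.0, 45.0)
print(f"deflection is about {d:.1f} m")
```

A miss of that size sinks no ships and refutes no Newton; it sends the gunner back to check his model.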

  8. AureliusMoner

    @Lee Phillips

    Is it really possible that you do not grasp the error in your statement? Or, are you simply throwing empty rhetoric, despite knowing how absurd your statement is?

    In either case, you are an exemplar of why “science” has erred so gravely.

    (Here’s a hint: if Newton’s Laws consistently failed to correspond to what actually happens, they wouldn’t be called “Newton’s Laws,” they would have been thrown out.)

  9. FAH

    A slightly long comment.

    My impression is that the methods of science are fairly healthy within the fundamental sciences such as physics, astronomy, chemistry, and biology, except for the penchant in some areas of physics to stray into metaphysics. Disciplines that feel the need to tack the word “science” at the end of the name of the field strike me as sometimes trying to cloak what is done with an aura of science without the body of the discipline, usually when there are political implications of the work. Climate science and social science are two examples. (I am less critical of library science, earth science, engineering science and computer science, since they generally are less related to political controversy.)

    It is hard to convey to non-physicists that climate science is not consistent with physics. Advocates of CAGW are fond of saying that it is “just basic physics”. In truth, it is not physics at all. Physics has much in common with accounting. Physical theories can be viewed as a “chart of accounts” with which the details of the behavior of the system (“the books”) must accord. In physics these are often Hamiltonians or Lagrangians in dynamical theories (or various micro- or grand-canonical ensembles in thermodynamics). From these descriptions the differential and integrated behavior of systems described can be specified to fairly high accuracy. For example, a quantity such as “9.8 meters per second squared” can be derived from a variety of ever more comprehensive theories of gravity for systems of relatively low (astronomically speaking), positive mass-energy. Deviations from quantities in these theories are reconcilable to parts in many orders of magnitude and traceable to deviations in the relevant potentials (for example deviations in the earth’s mass distribution) or poor choices of coordinates, such as with a rotating frame, which leads to so-called pseudo-forces such as Coriolis. Climate science has not gotten there yet; the “chart of accounts” is not sufficient for accurate book keeping. A list of possible “forcings” and “feedbacks” is not a theoretically consistent thermodynamic description. Without such a theory, there is no way to determine if the outputs of models based on such lists are complete or, equally problematic, internally cross-correlated. Both lead to unphysical results.

    There is yet no foundation for the physics of a global thermodynamic theory. Climate science (like most applied sciences) draws concepts from physics and tries to apply them to a specific system, but that is not the same thing as working within a demonstrated physical theory. (Astrology also draws concepts from physics (planetary and stellar orbits and positions) but comes to its own conclusions on cause and effect outside of a demonstrated physical theory.) What is understood as the climate is a complex, inhomogeneous, non-equilibrium system. There is currently no way to encompass such a system within a consistent theory based on physics. Work has been underway to establish such theories, so far without resolution. One promising notion (in my view) is based on a hypothesis of Maximum Entropy Production (MEP), but it is far from convincingly demonstrated (see the work of Axel Kleidon and others in the literature). Other theories try to view climate as a system of coupled Carnot engines operating between local temperature differences (not averages), but these approaches are only speculative (see the work of Valerio Lucarini and others in the literature). An example of what this means is that the so-called “global average temperature”, which we can denote T, is not a temperature at all, at least in the sense physics understands thermodynamics. In physics, temperature is an intensive property that describes the differential behavior between a total system entropy and its energy. The average of the temperatures (at given times and places) of a collection of interacting systems is not a temperature (based in physics) for the collection of systems. Its relation to the behavior of the total system is ill defined and arbitrarily a function of the method of averaging. 
Different averaging methods over differing times and spaces of the same underlying data can find differing trends of “warming”, “pausing”, or “cooling”, none of which have any theoretical relation to the thermodynamics of the total system. At best, T is an index, similar to stock market indices of volatility or fundamentals, which tradition may link to expectations, but it has no demonstrated, theoretically established (causal) relation to underlying behavior of the whole system. Thus asking one to invest money based on an index such as T is like the stock broker asking one to invest in stocks based on his market indices, when the broker makes money whether one wins or loses on the investment. (I think these last points are familiar to readers of this blog.)
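FAH's point that the "global average" trend depends on the averaging method is easy to demonstrate with a toy example. In this sketch (all numbers invented) two regions have fixed linear trends, and merely switching from a simple mean to hypothetical area weights flips the sign of the "global" trend:

```python
# Two synthetic "regions" with fixed linear trends; only the averaging changes.
years = list(range(20))
region_a = [15.0 + 0.10 * t for t in years]  # small region, warming fast
region_b = [10.0 - 0.02 * t for t in years]  # large region, cooling slowly
w_a, w_b = 0.1, 0.9                          # hypothetical area weights

def trend(series):
    """Ordinary least-squares slope of the series against the year index."""
    n = len(series)
    xbar = sum(range(n)) / n
    ybar = sum(series) / n
    num = sum((t - xbar) * (y - ybar) for t, y in zip(range(n), series))
    den = sum((t - xbar) ** 2 for t in range(n))
    return num / den

simple = [(a + b) / 2 for a, b in zip(region_a, region_b)]
weighted = [w_a * a + w_b * b for a, b in zip(region_a, region_b)]

print(trend(simple))    # positive slope: "warming"
print(trend(weighted))  # negative slope: "cooling"
```

Same underlying data, opposite headline trend; the index is a function of the averaging convention, not only of the system.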

    Consensus is a particularly insidious aspect within science. Feynman famously said “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” He gave a wonderful commencement address, 1974 at Caltech I think, focused on integrity in science, in which he gave an example of the dangers of consensus (which we might now call confirmation bias). It went like this:

    “We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
    Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that.”

    He ended that talk with his wish for the attendees: “So I wish to you—I have no more time, so I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom.”

    In the absence of definitive controlled experiments, it is not wise to trust consensus. A famous example from history is the state of physics in the mid-19th century concerning the aether. Tremendous progress had been made unifying electricity, magnetism, sound, and hydrodynamics into a satisfying wave view of propagation of energy. The consensus for decades was that the universe had to be filled with an “aether” which must have unusual properties to be able to carry light waves in space. Much of 19th century physics (the consensus) was devoted to experiments designed to test the properties of aether and determine its effects on the universe. The consensus was that light was carried by the aether. Throughout the 1880’s Michelson and Morley worked on interferometric experiments designed to measure the aether’s effect on light velocity at different points in the earth orbit. For several years, they thought they had confirmed the aether. Finally, after much attention to experimental errors (and one nervous breakdown), they conclusively demonstrated a negative result. There was no aether. It took another decade or so for the “consensus” to adjust, but the result was first special relativity, then general relativity and our modern understanding of force propagation.

    Inject statistics into the mix and true danger comes. I worked for a time with some excellent statisticians at a place called Bell Labs. They gave me what I have come to view as “tough love” with respect to use of statistics in physics. In particular, the main thing I remember is that statistics is much better at telling you what you cannot infer from data than what you can infer. I have come to view statistics somewhat as I would one of the Rings of Sauron, perhaps not The One, but at least one of The Nine. It can tempt a scientist with its power until he or she is completely blinded by it and spouting utter gibberish. One notion I remember they imbued in me was that the more one tests models against data, the less significant one should view the results. I had trouble with it then and I have sought some literature on the notion off and on without success. The basic idea is that if one tries 20 models to fit to some data and the 20th attempt fits well, then one should downgrade the formal characterization of the goodness of fit to take into account the number of trials it took one to get there. Somewhat like the effect of the observer on the outcome of an experiment or that the information content of the data is greater the less one looks at it. Presumably the best result would be the first and the significance of the result becomes vanishingly small as the number of trials increases, even though the formal descriptive statistics output indicate significance. I would still like a reference on this notion, but I don’t know how to search within the statistical literature efficiently. In any event, I am still convinced that statistics is a dangerous temptation to a physical scientist.
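The "try 20 models until one fits" effect FAH describes is the familiar multiple-comparisons problem, and it can be simulated directly. Under the null hypothesis a p-value is uniform on [0, 1], so reporting only the best of k attempts inflates the false-positive rate well past the nominal level; a sketch:

```python
import random

random.seed(0)

# Under the null hypothesis a p-value is uniform on [0, 1]. If a researcher
# quietly tries k models and reports only the best fit, the chance the
# reported p-value clears 0.05 is 1 - 0.95**k, not 0.05.
def best_of_k_rejects(k, alpha=0.05, trials=20_000):
    """Monte Carlo estimate of the family-wise false-positive rate."""
    hits = 0
    for _ in range(trials):
        if min(random.random() for _ in range(k)) < alpha:
            hits += 1
    return hits / trials

print(best_of_k_rejects(1))              # about 0.05, as advertised
print(best_of_k_rejects(20))             # about 0.64: the "20th model fits!" effect
print(best_of_k_rejects(20, 0.05 / 20))  # Bonferroni-style alpha/k restores ~0.05
```

This is one standard formalization of the downgrading FAH's Bell Labs colleagues urged: divide the significance threshold by the number of attempts (Bonferroni correction), which is the simplest of the multiple-testing adjustments in the statistics literature.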

    One final note. I mentioned that a slightly dangerous occurrence in physics is to venture into metaphysics. It is not so evident in specialties such as solid state or plasma physics but finds its way often into relativity and quantum mechanics much more. A good example is found in recent results on quantum entanglement and the so-called holographic principle. A readable viewpoint piece appeared in December, entitled “Closing the Door on Einstein and Bohr’s Quantum Debate” by Alain Aspect, available at
    The thrust of the issue is what is called “spooky action at a distance” or the ability of events to influence events far enough away that the influence travels faster than light, essentially instantaneously. A number of experiments have been done testing whether this happens and it appears that it does, although a variety of “loopholes” have been proposed by which it could be explained without instantaneous propagation. I think the article should be fairly readable to non-physicists, but the fun part comes at the end, where he considers the “free-will loophole,” based on the idea that everything that happens in the present can be somehow correlated with something that happened in the past. I think philosophy junkies might get a kick out of the article.

  10. Ray

    The Oxford English Dictionary defines the scientific method as “a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.”

    The climate scientists believe that if the data doesn’t support the hypotheses you don’t modify the hypotheses, you modify the data.

  11. Scotian,

    Good analogy. If Newton fails to predict, is Newton wrong, or should we ask if we are in a non-inertial reference frame?

    Lee might also be forgetting that models have factors that are modified to fit past data. Bad predictions forward (based on those factors) clearly invalidate some part of the model: either the factors themselves, or the selected fidelity and interactions at play in (or not in) the model, or some combination. This is modeling 101, though.

    If I messed up the gravitational constant in an orbital simulation and got bad predictions of a trajectory I wouldn’t throw out all of orbital mechanics.

  12. Lee Phillips: No, no one is saying the laws of motion or the laws of thermodynamics or whatever physical law that is being used is “wrong” in the sense you seem to be suggesting. Global warming is based on models using the idea that CO2 is a greenhouse gas and that it is the primary driver of climate. That second part of this theory is the problem (some would argue the first is too). Climate is complex and global warming predictions are based on models that have many estimations and data that has been “adjusted”. The global warming theory is not correct if CO2 continues to rise and the temperature does not. CO2 is not the primary driver in this case. That is the theory that is clearly wrong. Temperature is not continuing to rise, unless statistical gymnastics are performed with the data, as in the case of Karl and his ocean temperatures proving warming is still happening. Whether or not Karl’s study is valid, the amount of temperature rise is wrong in all cases. If the amount is wrong, the theory is wrong.

  13. Scotian (and others):

    “I believe that you know that this is not true.”

    Yes, that’s the whole point. The calculation has many ingredients. If it fails, we don’t know which ingredient is bad. Pretending we do is politics, not science.

    Here it is symbolically:

    A ^ B ^ C ^ D ^ …. => X

    Briggs’ analysis amounts to ~X => ~C, where C is the thing that he happens to want to be false. This is an error in logic: the third one that I’ve noticed our host making in just a few weeks. Political passions do tend to dull the reasoning faculties.

    (In the above ~X means “not X”, that X is false. The ^ signs represent logical conjunction. The => sign is logical implication.)
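Lee Phillips's symbolic point can be checked mechanically. A small illustrative sketch enumerating, for four premises, every truth assignment consistent with a failed prediction:

```python
from itertools import product

# If (A and B and C and D) => X, then observing not-X tells you only that
# at least one premise is false, never which one. Enumerate every premise
# assignment consistent with the failed prediction:
premises = list(product([True, False], repeat=4))
consistent = [p for p in premises if not all(p)]  # not-X rules out only all-True

print(len(consistent))                    # 15 of 16 assignments survive
print(all(not p[2] for p in consistent))  # False: C need not be the culprit
```

Of the sixteen possible assignments, the failed prediction eliminates exactly one (all premises true); in most of the fifteen survivors C is still true, so ~X => ~C does not follow.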

  14. FAH

    Sheri, you mentioned Karl et al., I think in response to someone else. Given my views on global average temperature time series I think time spent with them is somewhat wasteful of intellectual energy, but like the scorpion in the scorpion/frog story – I can’t help it sometimes, it is in my nature.

    The action in Karl et al. is in adjustments to the sea surface temperatures, SST. Let’s look at the Karl et al. SST adjustments a bit. The supplemental material is available online and describes what they did. The second paragraph of the SST section states they give the buoy data more weight, as they should since the buoy data is both more accurate and less noisy. However, they do this only after they have “adjusted” the buoy data, artificially warming it +0.12 deg C to agree with the less accurate and more noisy ship data. In the first paragraph, they state “To make the buoy data equivalent to ship data on average requires a straightforward addition of 0.12°C to each buoy observation.” So the “additional warming” is exactly the artificial “warming” added to the buoy data. As stated below in that section, satellite data of sea surface temperature has also been deleted because it adds cooling, more in agreement with the unadjusted buoys.

    It is well established that the buoy data is intrinsically more accurate and less noisy (otherwise the relatively expensive buoy program would be a waste of money), e.g. from Emery and Schluessel 2001, available online at:

    [the link makes the system think my comment is spam so you will have to find the link yourself via googlescholar – but it is available online]

    “Turning to a comparison between drifting buoy and ship SSTs, we present in Figure 8 the differences between the ship and drifting buoy SSTs again as a function of separation distance. The positive mean difference of 0.28°C is consistent with the observation that ship injection SSTs are slightly warmer due to the heating in the engine room where the observations are made [Saur, 1963]. The ship-buoy RMS difference at 1.8°C is about twice the size of the drifter versus drifter RMS difference and is also consistent with the fact that ship SSTs are found to be a lot noisier than buoy SSTs [Kent et al., 1993; Kent et al., 1999; Kent and Taylor, 1997]. The visual reading of the injection SST and the recording of the ship SST introduce some of this variability by hand.”

    And later summarizing,

    “The inherent accuracy of these buoy and ship measurements was explored using a unique computation of temperature (SST) differences as a function of separation distance for buoys or ships reporting within the same hour. For drifting buoy data and separation intervals between 0 and 50 km the mean difference is -0.05°C with an RMS of ~0.4°C. The statistics for the ship SSTs result in significantly larger errors (mean of about -0.15°C and RMS difference of ~1.2°C) as would be expected from the less homogeneous ship temperature sensors most of which are not calibrated and whose analog measurements are not regularly checked for calibration.”

    Adjusting measurements known to be more accurate to agree with measurements known to be less accurate is not common practice in physics, astronomy, chemistry, or biology.

    Now, why is this important to the trend? First, if it does not affect the trend, then why bother? Why deliberately adjust accurate data to agree with less accurate data? The answer of course is that it does affect the trend. (Although there is an apologist current trying to argue that it does not affect the trend, but if it doesn’t, again, why bother?) The whole thrust of the Karl et al. analysis is to affect the trend. The SST adjustments do this because the ship data essentially introduced the warming after WWII on an increasing basis and the warming trend depended on that. As the ship data became corrected by the buoy data in the more recent decades, that artificial warming was corrected and the trend began decreasing. The ship data (taken from the Voluntary Observing Ship program) is described in Kennedy et al. 2011, available at

    [again the link is rejected but you can find it on googlescholar]

    Looking at figures 1 and 2 shows the steep upward trend in numbers of ship measurements post WWII and the decreasing trend coming into the hiatus time frame. A major reason for the decrease is that much more precise measurements have been coming on line in the modern era, namely buoys and satellites and the fact that the ship observations were voluntary and poorly controlled. This is seen from NOAA’s buoy program at

    [at the website under /phod/dac/dacdtat.php]

    Note the figure in the upper right corner on buoy years of data. And in terms of numbers of buoys,

    [ trying this “” ]

    Note the scale on the right gives the number of buoys in service as a function of time. So, as more accurate cooler measurements increasingly came on line, the long term trend would have been reduced by comparison to earlier years, and the short term trend would have been reduced by the increasing inclusion of more accurate, cooler, data. There are a variety of discussions ongoing about the rationale for adjusting the less accurate buoys to agree with the ships and to exclude the satellites, but the basic fact is not in dispute. If you look at the adjusted ocean trends in Figure 1 of Karl et al. the warming adjustments (just less than 0.1 deg C) are essentially all due to this artificial warming of the buoys. If any adjustments should have been made, the ship measurements should perhaps have been adjusted down. As I said above, the practice in the hard sciences is to obtain the most accurate measurements first, then evaluate the behavior, not to adjust the measurements to agree with what one thinks the behavior should be.

    Part of the difficulty is the fact that such numbers are indexes and not well defined physics quantities in the first place, as I mentioned in my previous comment. This is further complicated by focusing attention and rationales for calculation of an anomaly and its expected behavior rather than an actual physical quantity, as in the adjustments made above to the sea surface temperatures. Trying to understand the time behavior of an index is easiest if care is taken to consistently calculate the index. Unfortunately for climate science the methods of spatial and time averaging keep changing as do the underlying physical measurements. This makes separating index behavior from adjustment behavior much more difficult.
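The composition effect FAH describes, where a growing share of cooler-reading instruments pulls a blended series down unless an offset is applied, can be illustrated with a toy model. This sketch uses invented numbers and is emphatically not Karl et al.'s actual procedure; it only shows the mechanism by which a fixed offset interacts with a changing instrument mix:

```python
# Toy composition-bias example (all numbers invented): the "true" sea-surface
# temperature is flat, ships read 0.12 C warm, and the buoy share of
# observations grows over time.
years = range(20)
TRUE_T, SHIP_BIAS = 20.0, 0.12

def blend(adjust_buoys):
    """Blended record as the buoy share rises from 0% to 76% over 20 years."""
    series = []
    for t in years:
        f_buoy = 0.04 * t
        buoy = TRUE_T + (SHIP_BIAS if adjust_buoys else 0.0)
        ship = TRUE_T + SHIP_BIAS
        series.append(f_buoy * buoy + (1 - f_buoy) * ship)
    return series

def trend(series):
    """Ordinary least-squares slope per year."""
    n = len(series)
    xbar, ybar = (n - 1) / 2, sum(series) / n
    return sum((t - xbar) * (y - ybar) for t, y in enumerate(series)) / \
        sum((t - xbar) ** 2 for t in range(n))

print(trend(blend(adjust_buoys=False)))  # negative: spurious cooling as buoys arrive
print(trend(blend(adjust_buoys=True)))   # ~zero: offsetting buoys removes that cooling
```

In this toy, warming the buoy readings by the ship bias raises the recent trend by exactly the bias times the rate of buoy uptake, which is the arithmetic behind the dispute: whether one levels the two instrument types by adjusting buoys up or ships down changes nothing in the toy's trend difference, but which series is treated as the reference decides which record is declared "corrected."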

  15. FAH

    I just noticed. In the above toward the end “adjusting the less accurate buoys” should read “adjusting the more accurate buoys.”

  16. Lee Phillips:
    “If it fails, we don’t know which ingredient is bad.”

    You are correct. We don’t know which part fails. Most people seem to object very strongly if someone claims CO2 is not a greenhouse gas, so that just leaves the claim that it is the major driver of climate. However, if it’s not a greenhouse gas and it’s not a major player, then the theory is broken. Which part can stay so that the theory remains valid?

    CO2 may contribute but not be the major driver, which requires a complete reworking of the theory (as should be indicated by the panic when temperatures ceased to rise and every excuse under the sun, including the sun, was introduced). CO2 can still be a greenhouse gas, but the theory of CAGW is definitely wrong, and the theory of AGW may be correct but with the amount and outcome incorrectly calculated by models and the current theory. So are you saying CO2 can be a greenhouse gas but it just does not matter?

    It is possible that temperatures are rising and humans are causing it, but the mechanism is unknown. That is an untested hypothesis, not a theory. Until the mechanism is known, it remains unproven.

  17. Joy

    Is it just me? I can hardly bear to read this sort of psychobabble.

    That quote is so typical of the fuzzy language in fluffy studies.
    Culture, evolutionary history, and genetics are all clearly distinct.

    The study above blurs the lines of meaning and understanding in such a subtle way, either deliberately or through ignorance, that it seems to imply the foregone conclusion.
    Anyway I wonder if they objected to their underpinnings being probed? It had to be said.

    The other yawningly common feature of “studies that show” is the over-embellishment of the writing so as to give the impression of high importance or greater truth.
    Less money for science. That’s the answer.

    I am reminded of Gavin Schmidt’s famously glib remark:
    “Tacit knowledge gets lost in translation with climate modeling.” A telling statement. No thought of the necessity to be clear to the public about the nature or broader significance of climate models.

    If people are becoming jaded by science, it is a necessary phase.

  18. berserker

    “After many regression analyses and much hierarchical linear modeling […]”
    – Truly funny. During my time as a doctoral student in a social “science” discipline at a major university, I realized that both students and most faculty have no understanding of what it is that they are actually doing. It is pure ritual and motivated reasoning masquerading as research. Unfortunately, I possessed neither the self-confidence nor the courage to drop out. My penance has been a refusal to join a faculty.

  19. MattS


    “My penance has been a refusal to join a faculty.”


  20. There is a strong faith-based belief in the mutability of the human mind (intelligence, personality…) among Western societies, including academia. This is grounded in the moral precept that we should all be equal. When evidence is cited against the belief (e.g., twin studies), it is rejected on the grounds that the evidence is insufficient. There are two problems here: the evidence can always be hand-waved away as insufficient, and you cannot hold a belief without evidence until such time as the evidence crosses some ill-defined line into ‘sufficiency.’

  21. I love the way you cons mis-frame any argument that shakes your ideology. The reaction to The Bell Curve was not that big of a deal, nor was the book, which was typical conservative uselessness. But it’s a great example of the way you guys inflate inconsequential nonsense, and of the vast conservative blind spot for irony.


  22. I walked out of my house this Sunday and faced my car with its windshield partially covered in ice. My wife’s car was fully iced. My other car had no ice. The two cars with ice were next to one another. The third car was sheltered by the south side of the house. The first two cars were sheltered by the west side of the house.

    Within 20 ft conditions varied enough to have highly varied outcomes.

    We can point to many factors that can cause these differences. I learned a long time ago that cleaning my windshield was much easier if the windshield was allowed to cool down before moisture collected on it. Coming home at night in the snow was a sure recipe for a tough time in the morning. The warm windshield melts the snow and turns it into a solid layer of ice.

    Looking across my yard each morning, I see similar things happen without any intervention of internal combustion. Parts of the yard never get sun and never have the ground thaw.

    Attempting to construct a model of my yard entails many interactions with boundary layers. Oh, so many. The boundary layer is not easy to deal with. It cannot be confined. Lenticular clouds are the result of boundary layers acting at large scale. Then we have pipe flow in our homes, where flow can dwindle to nothing only to have an underpressure cause the entire wall to rattle as the boundary layer extends entirely across the pipe.

    Always the relative size is changing. If you have played around with the cell-size problem in global circulation models, you know the conundrum you face. The smaller the cell, the more iterations you have to do. Going from 100 km x 100 km blocks to 50 km x 50 km blocks doesn’t increase the work by a factor of 4; it increases it by at least a factor of 8, because the duration of each iteration (the time period that the iteration represents) also has to decrease by half. Part of the assumption of the model is that air does not cross an entire block within a single iteration. If air is moving at 20 km/hr and the block is 100 km on a side, the iteration has to represent 5 hours or less of simulated time. If the speed is 150 km/hr (hurricanes), then the time iteration has to be << 1 hr.
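The back-of-the-envelope constraint described above can be sketched in a few lines. This is a hypothetical illustration using the comment's own numbers, not code from any actual GCM:

```python
def max_timestep_hours(cell_km, wind_kmh):
    """Longest iteration length for which air cannot cross a whole cell in one step."""
    return cell_km / wind_kmh

def relative_cost(cell_km, base_cell_km=100.0):
    """Work relative to the base grid: the cell count grows with the square of
    the refinement (in 2-D), and the timestep shrinks linearly with cell size."""
    refine = base_cell_km / cell_km
    return refine ** 2 * refine  # 4x the cells times 2x the steps = 8x

print(max_timestep_hours(100, 20))   # 5.0 hours at 20 km/hr winds
print(max_timestep_hours(100, 150))  # roughly 0.67 hours in hurricane winds
print(relative_cost(50))             # 8.0, i.e. 8x the work of a 100 km grid
```

The same reasoning is why halving the cell size in three dimensions is even costlier: the cell count grows with the cube of the refinement before the shorter timestep is even counted.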

    My house demonstrates to me that there are temperature gradients between my deck and the side of my house.

    When a scientist tells me that their model (GCM) does a good job of predicting the climate and he doesn't get nervous about what I describe happening at my house, I start to get nervous about his actual expertise in the subject.

    Thank you, Jersey McJones, for continuing to visit. Your continual comments have yet to suggest that you have any handle on what is being said here.

  23. Ye Olde Statistician

    @FAH: regarding the number of trials. In a very simplistic and somewhat inaccurate way, suppose you ran an experiment and there was a p-value of 0.05. (Briggs, put fingers in your ears.) That means that if there were no real effect at all, natural variation in the measurements would result in a positive result perhaps one time in twenty. Now imagine you have conducted the experiment twenty times. You are quite likely to obtain at least one positive result in the twenty trials even if the actual effect is zero. This is one reason why p-values should be taken with more than a grain of salt. The stated value does not hold for multiple trials.

    Suppose you troll Big Data for ten diseases and ten risk factors and you use a statistical test with a p-value of 0.01. You will very likely find that one of the risk factors is “associated with” one of the diseases. Ellis Ott took this into account when he developed factors for his “Analysis of Means” for industrial trouble-shooting. The factors depended not only on the sample size for each group, but also on the number of groups.
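Both examples above come down to the same family-wise error arithmetic. A minimal sketch, assuming independent tests (which real data rarely satisfy exactly):

```python
def prob_false_positive(p, n_tests):
    """Chance of at least one 'significant' result from n independent null tests."""
    return 1 - (1 - p) ** n_tests

# Twenty repeats of an experiment at p = 0.05, with no real effect anywhere:
print(round(prob_false_positive(0.05, 20), 2))   # 0.64

# Ten diseases crossed with ten risk factors at p = 0.01 (100 tests):
print(round(prob_false_positive(0.01, 100), 2))  # 0.63
```

Either way, the chance of a spurious "finding" is closer to two in three than to the nominal one in twenty or one in a hundred, which is the point of Ott's group-size-dependent factors.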

    @Lee: About a hundred years ago, Pierre Duhem pointed out that every scientific hypothesis consists of a large number of hypotheses, and when the result is falsified one cannot always tell which of the contributing hypotheses has been falsified. He mentions a scientific experiment in which the self-same results both proved and disproved the same hypothesis for different physicists, because one accepted the concept of pressure of Lagrange while the other accepted the concept of Laplace. This is known today as the Duhem-Quine thesis of the underdetermination of scientific theories: any finite set of facts can be explained by multiple theories.

    Now, when we say that a model has been falsified when it fails to match reality, we mean that something within the model is wrong. Perhaps a factor is missing; or an extraneous factor has been included. Or there are factors that are mutually correlated (which throws off the coefficients). Or the linkages are wrong. And so on. In any case, the theory as a whole (which said these factors linked in this way produce that result) is wrong.

  24. FAH

    @YeOlde: Thanks for helping. I always approach statistics with a fair measure of humility. The application I was alluding to was not one in which the trials were a process of obtaining new data. In particular the idea is that one has a large set of data already, and one has a candidate set of possible variables that one thinks might “explain” the variance. Obtaining more data is not an option. The observer, in this case the statistician (or the statistically uninformed physicist) then attempts to find some combination of the candidate set of variables that “fit” the data, or “explain” the variance. Physical scientists seem to come into such analyses with heavy biases about what should be the cause-effect relations, but statisticians in my experience are much less biased beforehand. Now the statisticians who corrected many of my mistakes were the same ones who in those days advised AT&T on how to present the costs of AT&T as a function of various operational, capital, and market factors and they were well versed in multivariate regression analyses apparently because that is what the Public Utility Commission liked in those days.

    So the situation was this: One has a body of data, say hundreds or thousands of points and for each of these points one has a dozen or more candidate explanatory variables. First one might try a linear combination of the variables one is biased to expect should work, say one third of the total variables. That does poorly. One then tries adding another variable. That does poorly. One then tries dropping one and adding two more. That does better but not well. One then tries all of them. That is a disaster. One fiddles around a bit and finds that some subset of the possible variables are themselves correlated. One then winnows down the set a bit and tries another combination. That does poorly. One then tries transformations of some of the variables. And so on, until finally one tries a combination of variables that “works” in terms of the formal outputs of whatever statistical package one is using. I think in those days we were using one called S. So in this example one might have tried 20 or more models (meaning a set of variables chosen or transformed from the set of possible variables) and finally found one that seemed to “work.” What they tried to tell me is that even though the formal statistical measures, e.g. p-values, etc. for the final fit might indicate a certain level of assurance, that the value of those measures should be downgraded to account for the number of models tried prior to reaching the final one. I thought they implied that there was a mathematical process to adjust the estimates of goodness of fit for the final model to account for the number of times other models had been tried without success.

    So the claim I think they made is that the more times one “looks” at the same data and tries to find a relation, the less one should trust the final result. Conversely, the first look at a set of data (meaning the first model one tries to fit to some data) should be “trusted” more than later “looks.” And more, that level of trust was expressible mathematically. I guess in this case the “trial” would be the observer attempting to fit a model, not actually obtaining data.

    (I suspect the process I described in the second paragraph may be a frequent occurrence in the social sciences.)
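The winnowing process FAH describes can be mimicked with pure noise. This toy simulation (made-up data, standard library only) shows how the best of twenty candidate "explanatory" variables can look like a real signal even though every variable is random:

```python
import random
import statistics

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n = 50
outcome = [random.gauss(0, 1) for _ in range(n)]      # pure noise "data"
candidates = [[random.gauss(0, 1) for _ in range(n)]  # twenty noise
              for _ in range(20)]                     # "explanatory" variables

# The winnowing step: keep whichever candidate correlates best with the outcome.
best = max(abs(pearson_r(x, outcome)) for x in candidates)
print(round(best, 2))  # the best of twenty looks is typically far from zero
```

Reporting only the winning variable's fit statistics, with no penalty for the nineteen discarded tries, is exactly the over-trust in the "final look" that FAH's statisticians warned about.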

  25. Ye Olde Statistician

    find some combination of the candidate set of variables that “fit” the data, or “explain” the variance.

    My cosmologist friend once told me that he had learned in his graduate statistics course that with seven factors you can fit any set of data, “as long as you can play with the coefficients.” The model so obtained will “fit” the data used to construct it — duh — but may fail to predict new data. Of course, his math teacher later became the Unabomber, so what the heck.
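The quip can be made literal: seven coefficients, i.e. a degree-six polynomial, pass exactly through any seven points, yet say nothing reliable about an eighth. A stdlib-only sketch using Lagrange interpolation (the sample points are made up):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate, at x, the unique degree-(n-1) polynomial through the n points (xs, ys)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Seven arbitrary "data" points: a seven-coefficient model fits them perfectly...
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [3.1, -2.0, 5.5, 0.4, 7.7, -1.2, 2.9]
print(all(abs(lagrange_eval(xs, ys, x) - y) < 1e-9 for x, y in zip(xs, ys)))  # True

# ...but the "prediction" at a new point (x = 7) is whatever the wiggles dictate.
print(lagrange_eval(xs, ys, 7))
```

A fit that merely reproduces its own inputs demonstrates nothing about new data, which is why confirmation runs at fresh settings are the real test.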

    In DOE we teach students to take the prediction equation, plug in some different values and get a prediction, then perform confirmation runs at those values to see if they match the predicted results.

    But heck, one of Ed Schrock’s rules for troubleshooting was, when you thought you had the cause in hand, to turn the problem on and off a couple times to ensure you had a genuine cause.

  26. FAH

    @YeOlde: I concur entirely about “fitting data.” I understand there are a number of problems associated with any effort to represent variation within and among a set of dependent-variable data via some set of hopefully independent variables. I am also aware that a variety of techniques are available to automate the process, to test results on subsets of the data, etc. Increasing the number of variables to coincide more and more closely with the number of data points will of course always produce a perfect fit, albeit an overfit. The example was deliberately exaggerated to illustrate the essential question, namely: is there any basis within formal statistics for discounting the inferential power of a calculation (such as a regression) according to the number of previous attempts, using either different methods or different sets of variables, to calculate one from the same data set? The background idea is that the more one has to look for a relation, i.e. “torture the data,” the less prone to believe the results (i.e. “the confession”) one should be. This would apply to trying different models, different types of regression, deleting “outliers,” etc. It seemed to be an analog of the observer-experiment interaction that has been a pivotal part of quantum mechanics for decades.

  27. Ye Olde Statistician

    A good example is the Tychonic model of the world, which used all sorts of factors, like the old Ptolemaic epicycles, to make excellent predictions of the heavens. It was mathematically equivalent to the Copernican model (which also used epicycles). But it certainly illustrates how a wrong model may still “fit the data.”

  28. Christopher Steven Day

    “Galileo’s Middle Finger” is a book by Alice Dreger, appearing next month in paperback, about the unfortunate conflicts that arise between people of good will when a subset of intellectuals believe they can commit to the egalitarian norms of social justice on philosophical grounds and nevertheless maintain an unprejudiced curiosity, pursuing truth on empirical grounds and following the data wherever they may lead.

    The author’s own involvement in a controversy concerning some motivations behind transgenderism, specifically male-to-female, provides the personal impetus for the book, but it treats far more than that. It points out numerous cases where scientific research is unduly politicized, often in service of good intentions, namely standing up for the little guy, but in ways that tragically wind up stifling efforts to understand. In some cases this politicization results in bullying, defamation, and death threats against scientists themselves.

    I have had my own share of run-ins with very smart people motivated by egalitarian norms of social justice, most notably with proponents of strong feminism, and I can say without equivocation that Dreger is right that the fault line really is along the question of whether it is possible to pursue truth without explicitly serving a political goal. I very recently asked two feminist intellectuals to entertain the question of whether human evolution has at all been influenced by the exercise of female choice with regard to which males they would deign to mate with. This is a normal question in biological science, but the answer I got was bewildering: suggesting that women have exercised any degree of freedom in their choice of mate ever in human history is unthinkable. Didn’t I know that every act of human reproduction up to the last 40 years was tantamount to rape? I guess being curious about sexual selection in the story of our evolution is off-limits because thinking that way can incidentally lead to arguments in favor of patriarchy. To hell with curiosity, then.

    I agree with Dreger that too often the question of what is true is subordinated to what we need to be true in order to be able to derive our preferred political ideas from the facts. And I agree with Dreger that we should work to keep science separate from politics, so that at least we in our political aspirations for a better world can be optimally informed about what we’re up against. It’s just too bad that so many on the neo-progressive Left (see: Regressive Left) have decided that the facts don’t matter anymore, since at any rate facts don’t exist, as all data are the product of interpretation and that is always subjective and therefore corruptible by political belief.

    God damn you, postmodernism.
