Read Part I
And speaking of measurement, first a word from our sponsor, BrainView Magnetic Window3, the world’s leading manufacturer of fMRI devices, the machines which produce colorful glowing pictures of certain parts of the brain when those parts are “active.” And thus let us explain why the amygdalae of conservatives (or was it liberals?) are more energetic than those of liberals (or was it conservatives?), etc., etc., etc.
I can’t stress to you just how crude these machines are. All studies which use fMRI (or PET scans, or whatever) are statistical. They show that an area of the brain is more active than other areas, but they do not show this reliably, nor do the people under the microscope always use the exact same portion of their brain when undergoing the same experimental procedure. It is also false that the rest of the brain, the parts that do not glow, is quiescent when other parts are “active.” The brain is a seething, restless, never-ceasing beast. The science of brain measurement is thus at the same level as eighteenth-century Chinese medicine, where doctors would diagnose all ills by comparing the pulses of the patients’ left and right hands.4
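To see why the statistical nature of these studies matters, here is a toy sketch (all numbers invented, nothing to do with any real fMRI protocol): each simulated subject strongly activates exactly one region, yet the group average shows weak, diffuse “activation” everywhere, a pattern that matches no individual.

```python
# Toy illustration (invented numbers): why a group-average "activation
# map" need not describe any individual. Each simulated subject strongly
# activates ONE region; the average smears weak activation across all.
import random

random.seed(1)
N_REGIONS = 10
N_SUBJECTS = 50

subjects = []
for _ in range(N_SUBJECTS):
    activation = [0.0] * N_REGIONS
    activation[random.randrange(N_REGIONS)] = 1.0  # one "hot" region per subject
    subjects.append(activation)

# The kind of average an imaging study might report
avg = [sum(s[r] for s in subjects) / N_SUBJECTS for r in range(N_REGIONS)]

print("group average:", [round(a, 2) for a in avg])
# Every region looks mildly "active" on average, yet no single subject
# shows that diffuse pattern: each has exactly one region at 1.0.
```

This is the same trap the footnote warns about: inferring facts about individuals from averages of images.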
Back to the mind: how does it arise? Gazzaniga says “emergence”, which itself is no explanation, since all it means is that the brain-body is so complex that the mind somehow arises out of it. This we already knew. What we want to know is how it does it. But Gazzaniga doesn’t know. Nobody does; not yet, anyway, and if you listen to philosophers who call themselves “mysterians”, nobody ever will5 (a point of view with which I have much sympathy, given my familiarity with over-certainty).
Gazzaniga likes to use the example that knowledge of how a car engine works tells us nothing (or something within epsilon of nothing) about the complexities of traffic. He quotes approvingly from Philip W. Anderson’s paper “More Is Different”:
The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a ‘constructionist’ one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society…
The arrogance of the particle physicist and his intensive research may be behind us…but we have yet to recover from that of some molecular biologists, who seem determined to try to reduce everything about the human organism to ‘only’ chemistry, from the common cold to all mental disease to the religious instinct. Surely there are more levels of organization between DNA and quantum electrodynamics, and each level can require a whole new conceptual structure.
In other words, you can’t tell what’s going on above by knowing what’s going on below. “In neuroscience when you talk about downward causation you are suggesting that a mental state affects a physical state. You are suggesting that a thought at Macro A level can affect the neurons on the Micro B level.” Even though he says this, this isn’t exactly what Gazzaniga is suggesting: he chooses to use the word “complementarity” in just the same sense as Bohr used it, but between the mind and brain-body (he’s dreadfully afraid of opening a door for dualism). But I don’t see the difference. By saying the mind—which arises out of the brain-body; it is its form—causes changes, we are not tossing out causation: we are saying the mind itself causes changes in the brain-body; that is, changes in itself. Placebo effect anybody?
Gazzaniga then plays around with the idea of a “social mind,” saying that minds cannot exist in isolation and need others. Or something. And that “the environment and the organism are coupled across time.” And that more or less universal behaviors and moral attitudes (e.g., toward incest) are found. Regarding the social mind, I think the logic either collapses on itself or reduces to Aristotle’s truism, so I do not explore these ideas further.
Given that alternate theories exist, why should people be so anxious to deny free will? Gazzaniga tackles that in the last section of his book.
There is more than a smack of savioritis among neurologists who bring us the gospel of determinism (or perhaps enfant terribilism is more apt). “You don’t have to be free!” they tell us, “Embrace determinism and whatever you do is not your fault.”
The fallacy, if it isn’t already obvious, is made so by this anecdote in which John Maynard Keynes observes the behavior of Bertrand Russell:6
Bertie in particular sustained simultaneously a pair of opinions ludicrously incompatible. He held that in fact human affairs were carried on after a most irrational fashion, but that the remedy was quite simple and easy, since all we had to do was to carry them on rationally.
As David Stove commented, “Just two effortless sentences, and yet how fatal they are to any belief in Russell’s political wisdom, or even sense! They are like a bayonet thrust through the heart and out the back.” Why, the trouble with the world is that people do not know they are not free; therefore, to fix the world, we just need to act like we are not free. Sheesh. It really is no better than this. However, we are no longer talking philosophy, but psychology.
The first suggestion on the minds of the hard determinists is that we should “change the law”, especially punishment, seeing that the guilty had no choice but to commit their crimes. This ends their argument. And in so ending, it ought to tell you the why. For if the guilty had no choice, the judge has no choice but to punish. What makes the judge freer to change his behavior than the criminal? His position of authority does not after all allow him escape from the “causal chain gang.”
Even Gazzaniga, no determinist as we have seen, allows himself to imagine that “neuroscience has an enormous amount to say about the goings on” inside the courtroom.
It can provide evidence that there is unconscious bias in the judge, jury, prosecutors and defense attorneys, tell us about the reliability of memory and perception with implications for eyewitness testimony, inform us about the reliability of lie detecting, and is now being asked to determine the presence of diminished responsibility in a defendant, predict future behavior, and determine who will respond to what type of treatment. It can even tell us about our motivations for punishment.
The last claim is false, and the others are dicey. But we shouldn’t begrudge a little speculative fiction from a man obviously in love with his job. As long as these ideas are not taken too seriously, they can even be fun. Trouble begins when they are taken seriously. We can all be grateful, for example, that courts have still resisted the idea of using “scientific” lie detectors.
Gazzaniga recognizes this, and says the following about using “brain scans” to identify “abnormal brains” in the courtroom:
There are other problems with the abnormal brain story, but the biggest one is that the law makes a false assumption. It does not follow that a person with an abnormal brain scan has abnormal behavior, nor is a person with an abnormal brain automatically incapable of responsible behavior. Responsibility is not located in the brain. The brain has no area or network for responsibility.
He reminds us of the case of John Hinckley who was found “not guilty” because of insanity. Yet his murder attempt “was premeditated. He had planned it in advance, showing evidence of good executive functioning. He understood it was against the law and concealed his weapon. He knew that shooting the president would give him notoriety.”
We started with Dr Johnson and we end with him, at least as far as demonstrating the existence of free will. We see that we have it because we use it, therefore it exists. Nevertheless, the fear is that we are only at the start of the neurological predestination fad. Gazzaniga’s book provides a sober check to the excesses we find elsewhere.
3I made it up.
4Doctors, all men, would not even be allowed to view female patients of the nobility. These ladies would sit inside an enclosure and only stick their wrists out. Gazzaniga makes the statistical nature of measurement explicit himself, and warns against using “averages” of images to infer facts about individuals; see p. 197. Whenever you see an fMRI study use the word “associated” when describing regions and behavior, you know the measurement is uncertain.
5For an explanation of this viewpoint, see Colin McGinn’s The Mysterious Flame: Conscious Minds In A Material World. Also see Feser. McGinn is arguing that philosophers should be renamed to “onticists”. He doesn’t like that in calling himself a philosopher, he is expected to know the answer to all of life’s mysteries.
6I learned of it from David Stove’s On Enlightenment, though I have since seen it elsewhere.
I’ll be away from the computer until Thursday.
Categories: Book review, Philosophy, Statistics
RE brain scans in the courtroom —
Junk science, or not? I don’t know the specifics of the case cited…but probably junk science.
We now know with certainty that with use & disuse parts of the brain “rewire” themselves accordingly. It’s called NEUROPLASTICITY. This is routinely measured. A longitudinal study of London cabbies showed how the parts of their brains associated with spatial relationships (maps & knowing routes) measurably changed over the two year training & certification program.
That’s one, of many, examples of behavior directly altering brain function sufficiently to show up on various brain scans. There are a few decent books for the layperson addressing this (e.g. ‘The Brain That Changes Itself,’ by Doidge; http://www.amazon.com/Brain-That-Changes-Itself-Frontiers/dp/067003830X — describes how extreme strokes resulting in significant loss of brain tissue can, in many [not all] cases, be overcome to the point of little or no outward effects due to “rewiring” — which requires conscious effort & dedicated work)
Which puts into serious question the early legal argument that the person’s brain, because it is “different,” is proof positive that the person is thus a hostage to an aberrant brain chemistry. The brain may be showing aberrant “wiring” because the person engaged in certain repetitive behaviors, thoughts, etc. A sort of which-came-first, chicken/egg argument turned around.
This is strong rebuttal, by the way, to the argument (prior blog essay) about biological determinism & the lack of “free will” — if biology truly ruled & free will was an illusion, then why/how can choices leading to entirely new habits physically alter the biologic structure???
The first suggestion on the minds of the hard determinists is that we should “change the law”, especially punishment, seeing that the guilty had no choice but to commit their crimes.
Well that’s not very bright thinking. You remove the bad apples from the barrel before they can completely ruin it. It doesn’t matter they can’t help being bad. Note that John Hinckley was incarcerated even though he was found “not guilty”. All that finding did was alter the location of his incarceration and removed the death penalty from the table.
We see that we have [free will] because we use it, therefore it exists.
That is utter nonsense and you know it. You claim “free will” merely because you think you have it. If you merely mean “capable of deciding” then OK but why do you insist on the “free” part? You have no clue how you arrive at decisions any more than you know how you make your arms move. There’s a danger with self-retrospection. Look at the gobbledygook Freud came up with (not that DSM-IV is any better).
if biology truly ruled & free will was an illusion, then why/how can choices leading to entirely new habits physically alter the biologic structure???
Who said one rules the other? It’s a combined system. Apply pressure long enough and your teeth will change position. I had nerve damage to my jaw that repaired itself (sorta) apparently through rerouting. It took no effort on my part. Maybe those new habits just establish conducive conditions for whatever occurs?
That’s strange; maybe an italics terminator will help. You’d think the blog software would do that automatically.
Didn’t work either. Was there more than one? Or are these being removed? Picked a rather bad day for this, with Briggs out of the office and all.
I think the blog software has free will with respect to italics.
While I am, in principle, as supportive of the efforts to find how the mind arises from the brain as I am of the efforts to develop artificial intelligence, in my view it’s more the journey than the destination that will reveal value, as the destination will always recede to the vanishing point.
Years ago, they thought that machines would be intelligent once they could play chess. Nope. Is IBM’s “Watson” intelligent? Nope. But we learned a lot about intelligence, and other things, as part of the quest.
The fundamental mistake, in my view, is in assuming that mind is caused by the brain, which is, in a crude way, equivalent to assuming that the algorithm is caused by the computer — no matter how much you study the computer hardware and the properties of the underlying electronics, you will learn little about the algorithm, particularly if you don’t fully understand the problem that the algorithm is solving.
Undiscovered (and, thus, not-yet-instantiated) algorithms must exist. To see why I believe this, just try asserting the opposite: No undiscovered algorithms exist — therefore all possible algorithms have been discovered. But precisely where is it that these nascent algorithms exist, that is accessible by mind but not by machine? True, there are now algorithms to discover algorithms, after a fashion, but they won’t discover solutions to as-yet unknown problems (except perhaps in a trivial way, by working backwards from a solution to a problem).
As a scientific sort of person, I justify (rightly or wrongly) my belief in “mind as a distinct phase of reality” by insisting that science is the study of physical phenomena, and matters of mind (and spirit) are non-physical — still subject to reason and logic, but necessarily based on certain axioms (as are reason and logic themselves). Choice of a set of axioms is a matter of personal taste — everyone accepts some axioms. Absolute knowledge can be attained, but it requires eternity to do so.
None of this helps me decide what to have for lunch. So many choices…
Uncle Mike: There are really only 3 choices for lunch, so it’s simple: you can eat items of food, non-food items, or forgo lunch.
Since consuming non-food items probably would be classified as a form of psychosis and not lunch, you really only have 2 choices.
Yet, to forgo lunch is to not have lunch, therefore you will have food when you have lunch.
See, your future is predetermined.
I’m sorry, really sorry but I keep reading, “Gazzinga”. Blame Sheldon Cooper.
A “mind” is what we individually refer to as “I”. It is more likely a sum total of every system and their interactions within the body as opposed to residing only in the brain.
An “algorithm” is just a description of a procedure for doing something. At the current design level, a computer has to be told how to proceed. That doesn’t mean in the future they will continue to need such close guidance.
Well, I did allow that the analogy was crude, but I do understand what an algorithm is.
The intent was twofold: to illustrate that the hardware and the software are different, and to illustrate that there is an abstract algorithm, with its own, very real, existence, that is not part of the hardware, and only *instantiated* in the software. The computer program is not the algorithm: many different computer programs will effect the same algorithm, and many more will produce the same output with a given input, but use different algorithms.
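The commenter’s distinction can be made concrete with a small sketch (the examples and function names are mine, not the commenter’s): two syntactically different programs instantiating the same algorithm, Euclid’s, and a third program that reaches the same outputs by a genuinely different algorithm.

```python
# Sketch: program vs. algorithm. Two different programs, one algorithm
# (Euclid's); a third program gets the same answers via a different
# algorithm entirely (brute-force divisor search).

def gcd_euclid_loop(a, b):
    # Euclid's algorithm, written iteratively
    while b:
        a, b = b, a % b
    return a

def gcd_euclid_rec(a, b):
    # The SAME algorithm, written recursively: a different program
    return a if b == 0 else gcd_euclid_rec(b, a % b)

def gcd_brute(a, b):
    # A DIFFERENT algorithm: test every candidate divisor in turn
    return max(d for d in range(1, min(a, b) + 1)
               if a % d == 0 and b % d == 0)

# All three agree on outputs, though only the first two share an algorithm
for a, b in [(12, 18), (35, 64), (100, 75)]:
    assert gcd_euclid_loop(a, b) == gcd_euclid_rec(a, b) == gcd_brute(a, b)
```

Studying the hardware that runs any one of these tells you nothing about which of them it is, which is the point of the analogy.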
More specifically, I don’t believe that any amount of study of a person’s brain will retrieve, or be able to reconstruct, say, their emotional reaction to a particular memory or event, nor predict whether they will choose a tuna salad or a hamburger for lunch tomorrow.
Furthermore, there is no physical procedure that could have been performed on a baby Shakespeare to get him to write “Hamlet”.
To suggest that what we now know as the play “Hamlet” is a statistical fluke, a mere coincidence that some chemical process brought together the required symbols in the required order, is, to me, preposterous. To write it, Shakespeare *must* have recognized the existence of, and been able to estimate the capabilities of, *other minds* that were not in his immediate presence (either spatially or temporally). He must have formed the desire to communicate to those minds, and chosen the medium and how to manipulate it. Surely it is evident that there is a discernible pattern, to the extent that it can be quantified, in the reaction of the members of different audiences to different performances of “Hamlet”. After all, two members of different audiences can still discuss the play itself, and will have a common frame of reference. It has not been demonstrated with any degree of rigor just how it is that chemical reactions can have such action-at-a-distance.
Of course, there has been a recent trend to hand-waving about quantum this and that, as if our inability to understand temporal and non-temporal aspects of phenomena at the nuclear level explains something, but to me it’s just another turtle on whose back the previous turtle rests.
Mind can’t be “explained” by simple unpredictability, or we would view white noise as intelligence.
The algorithm and the computer are separate things but not when joined. Then the computer becomes whatever machine is dictated by the code within the limits of the machine. Wood and bricks are separate things but become a house when joined properly.
I doubt there’s only a single process to be discovered. There are many things all happening at once. It’s a lot like trying to pick out a single conversation in a large crowd of people. And that’s assuming it would be that easy. It could very well turn out that it’s the combination of all of the conversations that should be sought instead. Whatever that means.
We can already pretty much judge a person’s emotional reactions without resorting to brain scans. The same for predicting them. There’s a game out that uses electromagnetic output from head sensors to control the position of a ball. Crude, but evidence that maybe more can be accomplished.
As yet, though, the current attempts with brain scans do suffer from way too much confidence in their conclusions.
NSA did a (probably ongoing) study back in the day to attempt to decipher the code, inputs and outputs of a running computer program by listening to the resulting electromagnetic spectrum. I don’t know how successful they were but do note they issued requirements for containing radiation from computers.
Just because the current attempts using brain scans amount to bumbling is not a reason to dismiss scans as a future source.
Antonio Damasio has many interesting things to say about the mind.
In my own opinion we can reconcile free will with living in an indeterminate, yet causally determined, universe in the following way:
The mind has a very large abstract/conceptual vocabulary with which to describe and understand the universe — while it has a very limited emotive vocabulary to assign “value” to its abstractions. Because of this, we can sometimes make connections that are not necessarily warranted by logic, i.e. connections based upon emotions. Sometimes these emotion-based connections pay off with genuine insight, and other times (in fact most times) they do not. Think of it as: concepts are “digital”, while emotions are “analog”.
The ability to make novel connections between disparate abstractions forms the basis of the human mind.
If a so-called coincidence were to happen, then everything after that would be a coincidence.
I don’t believe in coincidence.
I don’t want to belabor the point, but must apologize that I’ve not been clear.
The computer hardware and the *program* (not the algorithm) may or may not become indistinguishable when joined. But that is largely orthogonal to my argument.
My point is simply that the algorithm exists, whether or not there is a computer running a program that instantiates the algorithm. Maybe another example would be a mathematical proof: I assert that Perelman’s proof of the Poincare conjecture exists. But where is it? Is the essence of its existence simply the symbols on a piece of paper (or a computer screen), rendering it dependent on the paper or the screen for its existence? Does the selection of a different set of symbols (say, translating it to Japanese) render a different proof? If someone who had never been exposed to Perelman’s work proved, from first principles, the Poincare conjecture using Ricci flows exactly as Perelman did, would it be a different proof or the same one?
In considering those questions, I conclude that the proof exists and is a real “thing”, yet its only physical component rests in its symbolic representation, which is independent of the proof itself. Since it is a real thing without physical form, there must be a phase or dimension of reality (defined as the universe of all things real) that is distinct from physical reality. (i.e. people who understand the proof are merely observers of the real thing).
So the question then becomes: did Perelman’s brain “cause” the proof, which I think is a question that is logically equivalent to “did Perelman’s brain cause his mind?” In my view, the answer is “no”.
P.S. Sorry, but I don’t have the time, nor do I wish to impose on Dr. Briggs’ hospitality here, to fill in all the gaps — the above is only a rough outline of the reasoning. The details are left as an exercise to the reader 😉
“….yet its only physical component rests in its symbolic representation, which is independent of the proof itself. Since it is a real thing without physical form”
To believe that we can have knowledge of a “proof” without symbolic representation is a fallacy of reification. “Symbolic representation” (meaning words, mathematics) is the perceptual form of the method of knowing. Thought is an action of an existent (the mind), and thought is every bit as existential as its subject. Thought is not disembodied.
The idea that thought inhabits some mystical realm has caused no end of grief.
The will is an appetite or desire for products of the intellect. (Similar to the emotions, or “sensitive appetites,” which are desires for the products of perception.)
As such, and contra Nietzsche, the intellect is prior to the will. We cannot desire something we do not know.
If our knowledge is imperfect, our will is unconstrained to any one outcome. Example: world peace. Unless we know exactly what this “looks like” and how it must be attained, the will is free of any determination to any particular means. OTOH, where knowledge is perfect, as that 1+1=2 in normal arithmetic, the will is completely determined toward one end and cannot withhold its consent.
Hence, the freedom of the will is a direct consequence of the imperfection of knowledge.
There is no end to it. I looked in a book (Bertrand Russell’s A History of Western Philosophy) and I see this has been a hot topic ever since the Greeks at least. There were big questions like do atoms have free will, does God have free will, and not much to give me hope that philosophy could ever hope to resolve this debate.
I don’t find it surprising that ordinary people wish to hold on to beliefs in determinism. I would find it surprising if statisticians or quantum physicists did not believe in randomness (at some level; for example, electron orbits have become electron orbitals).
I would like to see where “Bertie” said that carrying on rationally was “quite simple and easy”.
Here is his conclusion to the mentioned book:
Bertie was an optimist 😉
In discussing what is meant by “liberum arbitrium” it would be well to consider what was actually meant by the people who coined the term. We cannot be too surprised if Cartesian and other modernist gimcrackeries prove to be incoherent. After all, one way to “disprove” something is to create an elaborate and unrealistic version of it.