It helps every time you hear “AI” to think “statistical model”. Because, of course, that’s what AI is. Statistical models layered with clever data processing, that is. Curve fitting.
Saying “statistical model” is bound to put any audience, except that composed of the toughest nerds, fast asleep. You can’t terrorize or enthrall anybody about our “Coming Statistical Model Future”.
Imagine saying “Statistical models will bring many wonders. It may also destabilize everything from nuclear détente to human friendships” and thinking you’d hear anything except snores or giggles in return.
How about this?
Humanity is at the edge of a revolution driven by statistical modeling. It has the potential to be one of the most significant and far-reaching revolutions in history, yet it has developed out of disparate efforts to solve specific practical problems rather than a comprehensive plan. Ironically, the ultimate effect of this case-by-case problem solving may be the transformation of human reasoning and decision making.
This revolution is unstoppable. Attempts to halt it would cede the future to that element of humanity more courageous in facing the implications of its own inventiveness. Instead, we should accept that statistical modeling is bound to become increasingly sophisticated and ubiquitous, and ask ourselves: How will its evolution affect human perception, cognition, and interaction? What will be its impact on our culture and, in the end, our history?
Class A soporific. No one will panic hearing it.
Make it “AI” instead and sweat pops out on brows. The imagination flares. Computers that can learn are going to take over the universe!
Such, anyway, is Henry Kissinger’s worshipful attitude. And Eric Schmidt’s and Daniel Huttenlocher’s.
Just listen to the way these guys talk. Guys you’d think would know better (my emphasis).
…developers of AlphaZero published their explanation of the process by which the program mastered chess—a process, it turns out, that ignored human chess strategies developed over centuries and classic games from the past. Having been taught the rules of the game, AlphaZero trained itself entirely by self-play and, in less than 24 hours, became the best chess player in the world.
No it didn’t.
This glorified calculator instead fit some curves, the nature of which was part of its input from human beings, the fitting of such also part of the input from human beings.
The statistical model, i.e. the fitted curves, forecasted certain outcomes conditional on different game states, and it turns out some of those forecasts were better than some made by people.
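To make “fitting curves and forecasting conditional on state” concrete, here is a minimal sketch. It has nothing to do with AlphaZero’s actual architecture, and the data are invented; the point is only the division of labor: a human picks the form of the curve, the machine tunes the coefficients, and the fitted curve then “forecasts” outcomes for new inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a numeric "game state" feature x and a noisy observed outcome y.
x = np.linspace(0, 1, 50)
y = 3 * x**2 - x + 0.5 + rng.normal(0, 0.05, size=x.size)

# The human supplies the form (a quadratic); the computer fits the coefficients.
coeffs = np.polyfit(x, y, deg=2)
model = np.poly1d(coeffs)

# The fitted curve now forecasts outcomes conditional on a new state.
forecast = model(0.8)
```

Everything "learned" here was baked in by the choice of a quadratic; the machine only turned the crank.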
Well, this is no surprise. Calculators have long been faster than men at adding and subtracting, just as bikes can beat men in a race. We don’t generally fear bikes. (Unless you live in NYC.)
Chess has trivial rules. Trivial. It takes no genius to encode them. Why shouldn’t fast calculators (computers) beat slow ones (men) at this simple game?
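The triviality is easy to check: the movement rules fit in a few lines each. A sketch (a hypothetical helper of my own, 0-indexed 8x8 board, ignoring other pieces) for the knight, usually considered the trickiest piece:

```python
def knight_moves(file, rank):
    """Squares a knight on (file, rank) can reach on an empty 8x8 board."""
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(file + df, rank + dr)
            for df, dr in jumps
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]
```

A knight in the corner has two moves; one in the center has eight. No genius required.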
Another game with trivial rules is Texas Hold ’em (thanks to Victor Domin for the tip). The odds of getting a card are easily fixed knowing what is on the table. The odds of whether a person is bluffing are only slightly harder. Part of those odds are conditional on the bet sizes that came before, the size of the current bet, the remaining money of the players, and past performance of those players—in a mathematical way that has to be guessed by a man.
It doesn’t take much to guess. But it takes a lot to calculate once the guess has been made. And, lo, statistical models are now beating human players.
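The “easily fixed” card odds are grade-school combinatorics. A sketch for one standard situation — four hearts among your hole cards and the flop, asking the chance of completing the flush by the river:

```python
from math import comb

unseen = 47        # 52 cards minus your 2 hole cards and the 3-card flop
outs = 9           # hearts remaining somewhere in the deck
misses = unseen - outs

# Probability neither the turn nor the river is a heart, then the complement.
p_no_flush = comb(misses, 2) / comb(unseen, 2)
p_flush = 1 - p_no_flush
```

This comes out near 35 percent. The bluffing odds are the part that needs a human guess; this part is mechanical.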
The amount of creative input in the modeling is minimal: the programmer says, “I think past bet sizes influence the odds of hands in this mathematical way.” Maybe the guess is wrong at first, but some honing gets it in the ballpark.
Chess involves memorizing lots of things, and statistical models are better at that than humans. Poker is probably less memory-intensive, and if there’s anywhere humans will figure out a way to beat a computer, once they learn the computer’s way of betting, it’s with this game.
But these kinds of tasks are, quite literally, child’s play. Less than child’s play. Consider instead of the question “Given the status of the board now, which piece should I move where?” this question: “How many times should President Trump visit Korea?”
Kissinger et al. think it easy, though:
Hardly any of these strategic verities [such as nuclear deterrence] can be applied to a world in which AI plays a significant role in national security. If AI develops new weapons, strategies, and tactics by simulation and other clandestine methods, control becomes elusive, if not impossible. The premises of arms control based on disclosure will alter: Adversaries’ ignorance of AI-developed configurations will become a strategic advantage—an advantage that would be sacrificed at a negotiating table where transparency as to capabilities is a prerequisite. The opacity (and also the speed) of the cyberworld may overwhelm current planning models…
More pointed—and potentially more worrisome—issues loom. Does the existence of weapons of unknowable potency increase or decrease the likelihood of future conflict?
Cyber warfare is not, I think, what they mean here. Securing access to calculators that run important things is important. (Usually easily done by unplugging the ethernet cable. In computer-boogey-man movies, no one ever thinks of shutting the power off at the substation.) The computer can only do what it’s told, and if somebody codes “Build a weapon like this”, it can do it. In bits. Building that weapon in steel still has to be done, though, and at that point we’re back at the regular arms race.
There are two real threats from statistical models, and one of them is not computers “learning”. The first is storage. You heard me: storage. Schmidt must have written this:
Google Home and Amazon’s Alexa are digital assistants already installed in millions of homes and designed for daily conversation: They answer queries and offer advice that, especially to children, may seem intelligent, even wise. And they can become a solution to the abiding loneliness of the elderly, many of whom interact with these devices as friends.
The more data AI gathers and analyzes, the more precise it becomes, so devices such as these will learn their owners’ preferences and take them into account in shaping their answers. And as they get “smarter,” they will become more intimate companions. As a result, AI could induce humans to feel toward it emotions it is incapable of reciprocating.
To a small extent, this is true. But statistical modeling will top out. Unless they is continuously augmented by human input. Real Turing tests can only be passed in trivial situations, or by those who want them to be passed.
Anyway, there’s a threat, all right. Google and Amazon employees listening to your conversations, storing them, and reporting them to authorities. Which we know they already do. Call this AI if you like, but self-bugging is a better term.
Governments and companies already store your cell phone data, which tells where you were and what you were doing for most of the day. We already know what happens online. Facial recognition (more curve fitting) is error prone, but storage is easy.
There will be nowhere to hide. Unless you give up your toys.
The second threat is spiritual.
AI will make fundamental positive contributions in vital areas such as health, safety, and longevity.
Still, there remain areas of worrisome impact: in diminished inquisitiveness as humans entrust AI with an increasing share of the quest for knowledge; in diminished trust via inauthentic news and videos; in the new possibilities it opens for terrorism; in weakened democratic systems due to AI manipulation; and perhaps in a reduction of opportunities for human work due to automation.
This is overwrought, except for the bit about deep fakes. You won’t be able to trust what you see. As long as people remember that, it’s an improvement. They won’t.
The other thing is that people already believe it’s the computers doing the thinking, not men. No computer has ever thought like a man, and no computer ever will. “I should do this, because AI told me to” is spiritual death.
Chess and poker can be programmed because the rules are exact or approximately so. Knowing how many times to visit Korea can’t be, because there are no fixed rules. Somebody can make some up, sure, and “optimize” decisions based on these made up rules. But unless those rules exactly match reality, which they won’t, decisions will likely be worse, not better.
Meanwhile, people will come to rely on the “expertise” of statistical models too much.
See YouTube videos on the difference between machine learning and deep learning. All nested in AI.
From just three or four of those relatively short videos you can get a picture of what’s going on.
That machines identify patterns is one of the ways our own brains work. Deciding if the patterns are meaningful is another level. As far as I understand, rather like with the brain, with neural networks it is impossible to know everything that’s going on.
Furthermore I believe that some of the trickier problems in medicine having to do with brain function will be aided and assisted by some of the apparently separate research into AI.
Just as veterinary medicine helps human medicine and vice versa.
Psychiatry and psychology might benefit.
I’m almost sure that pain science will.
There’s just no need to be so negative about AI. You don’t have to love sci-fi or the idea of Big Brother.
“yet it has developed out of disparate efforts to solve specific practical problems rather than a comprehensive plan” It developed because it made money and people are inherently lazy. They’d teach chimpanzees to launch spacecraft if they could.
“This glorified calculator instead fit some curves, the nature of which was part of its input from human beings, the fitting of such also part of the input from human beings”
Which makes me wonder, how do we get AI from creatures that lack fundamental intelligence in the first place?
(Usually easily done by unplugging the ethernet cable. In computer-boogey-man movies, no one ever thinks of shutting the power off at the substation.)
Fictional character Gibbs on NCIS is always pulling power cords on machines that are uncooperative. I’ve unplugged my computer mid-process, in spite of the dire warnings not to (designed, of course, to help virus people infect your computer). My printer is very familiar with having the cord pulled from the outlet. Of course, I tend to go old school on many things since that WORKED. Practicality all the way. (Just an aside, my husband and I were talking about having more powerful computing in a cell phone than the computers that originally launched the moon landing. Yes, but far, far less reliability than those computers. You increase power, you lose reliability. As you noted, you add far too many factors and the accuracy and reliability are lost. That’s what stupid “smart phones” represent. Plus, they make great electronic Soma. A zoned-out, eyeball-drugged society makes wonderful slaves.)
“And they can become a solution to the abiding loneliness of the elderly, many of whom interact with these devices as friends.” Heck, people with PTSD already prefer dogs to their own families. Why not a disk-shaped talking box? We shall yet learn to ignore or outright hate humans and love dogs and machines. Then the eugenics and wars won’t be a problem.
I started my PhD in AI. When I saw that AI was a fraud, I asked to be moved to the Software Development PhD program. And never looked back.
It is not that AI is not useful
It is that the ratio between hype and reality is 1 billion to 1. There is no artificial intelligence at all in AI, just some algorithms developed by people who have good natural intelligence. And don’t get me started about the periodical claims the press makes about AI dominating humans. Decades go by and the same claims get recycled. In the 60s, the claim was that we would have computers smarter than humans by the 80s, for example.
I think some of the hype of AI is due to the fact that it is linked to philosophical materialism. If the mind were only the brain, every physical process in the brain could be simulated in a computer and therefore intelligence could be simulated. As one of my PhD professors said: “We know that intelligence is implemented in the brain in the form of neural networks because it cannot be any other way” (meaning: you can’t be one of those medieval rednecks who think that spirit cannot be totally reduced to matter. We are scientists, for God’s sake, and don’t believe in witches). But materialism is not true; see the philosophical problem of qualia, or Searle’s Chinese room, or several arguments on Ed Feser’s blog. Admitting that AI won’t work would be admitting that materialism is false, and they can’t have that.
Modern AI is just about to the stage of being able to reproduce realistic behaviors mimicking a solitary cockroach. Ants are still out of reach, and bees are over some far horizon.
Mathematical modeling can’t yet outperform slime molds in some tasks.
The inventors of neural networks have abandoned it as a path to better AI. They have started over from scratch, acknowledging that neural networking is a dead end.
The real problem with neural networks isn’t their inherent limitations. It’s the fact that you never actually know what you’ve really taught them.
Echoing your article today is an article in CFACT:
That’s why in my near-future Firestar series they were called Artificial Stupids.
That a simulation produces the same output as reality does not imply that the internal structure of the simulation matches the real world.
“Calculators have long been faster than men at adding and subtracting”
And how. In the 1970s I used to design filters. Filter design is an arithmetic intensive process, AKA drudgery, so I wrote computer programs to do the calculations. One program had about 2000 lines of Fortran and took about 2 minutes to calculate the filter design. Doing the calculations by hand would take me a week. I didn’t consider the computer intelligent because it could do the calculations a lot faster than I could and it didn’t make mistakes in the arithmetic.
“There will be nowhere to hide”
Why would somebody want to hide? Hmm.
Is it any different that some creep at google looks you up on the internet as opposed to some random intellectual surfer who is really just trying to get women to “whisper into a microphone”? “whisper!” More hiding, more lying. More deceit incompatible with honour.
“The other thing is that people already believe it’s the computers doing the thinking, not men.”
No they don’t. Well, some people think the moon is made of cheese so maybe your point stands.
“No computer has ever thought like a man, and no computer ever will.”
Does that matter? What does seem to happen in the brain is comparable with some of the computation on a machine. No point denying that.
“‘I should do this, because AI told me to’ is spiritual death.”
I should do this because (insert bad source) told me to, is just foolish.
Spiritual death comes when you fail to recognise it when it happens. Some around here are already dead, spiritually, so they require a magic trick once a week to replay the algorithm when they deign to show up at mass.
“The other guy’s got it all worked out, we don’t have to” is absolutely the same cultish mentality. Aka fanaticism, dangerous, particularly when coupled with a brain.
Thank you, imnobody00 for your comments! But of course you are somebody with God.
When the body can no longer support life, the spirit leaves the body, and this we call death. We are spirit, soul, and body, according to Scripture, and therefore our minds are not material, although we use the brain to express our thoughts and to think with in connection with our spirit. This is something that AI cannot replicate.
As the article stated, it can only do as programmed information is given, faster, speedier, broader than perhaps the average person, but it is but a mechanical technological creation to which many people, it seems, are giving up their independence. What a shame, literally.
I believe that if the Catholic Church exercised the gifts given it by the Holy Spirit for healing, among other gifts, as Jesus did, if those in high office there, fulfilled all of their gifts given them, many, many people would be in health, praising God. Read Acts.
Sort of left AI behind, there, didn’t I?? Well, thank you for the article….so much to think on and pray on.
God bless, C-Marie
Having been taught the rules of the game, AlphaZero trained itself entirely by self-play and, in less than 24 hours, became the best chess player in the world.
No it didn’t.
This glorified calculator instead fit some curves, the nature of which was part of its input from human beings, the fitting of such also part of the input from human beings.
Uhhh, no. First of all it was trained using generated games from 5,000 first-generation TPUs — the so-called “self play.” Those may have been trained using human input, though it’s not clear if they were. Secondly, chess is not a probabilistic game like poker. You can see all of your opponent’s moves, which you counter while following your own agenda. Thirdly, to be really good, the program would have to respond to a specific line of play from each opponent.
Yes, chess can be trivialized by deep search and opening book memorization a la DeepBlue. But that’s a kind of cheating and isn’t what AlphaZero does.
If AlphaZero were merely curve fitting, its play would be the average of those 5000 TPU-generated games and its best play could only be as good as the very best of the generators. Put another way, if they were all average players, AlphaZero would operate as an average player, never rising above mediocre play. Somehow, though, its play exceeds that of the training generators, and it is capable of fending off attacks by an opponent while following its own agenda.
As Garry Kasparov (likely the best chess player in history) has said:
Grandmaster Matthew Sadler, after analyzing AlphaZero’s play, says
How did AlphaZero manage that by mere curve fitting? Is it possible that what AlphaZero does is close to (or at least similar to) what the best human chess players do?
This is yet another of your attempts to trivialize the accomplishments of AI by alluding to the simple nature of the components; completely ignoring that it’s the specific configuration of the components that leads to the actual results. It’s a lot like saying the brain is just a bunch of simple neurons communicating via electrical impulses so how can it do anything?
The original idea behind AI was to demonstrate human thought process theories and to perform similarly. However, the term AI has been usurped over the years to mean almost anything. So, yes, AI as a general term isn’t what is normally hyped.
Modern AI is just about to the stage of being able to reproduce realistic behaviors mimicking a solitary cockroach. … The inventors of neural networks have abandoned it as a path to better AI.
Yet, AlphaZero is a neural network which far exceeds the capabilities of a cockroach.
The real problem with neural networks isn’t their inherent limitations. It’s the fact that you never actually know what you’ve really taught them.
Strangely, the same can be said about humans.
“Unless they is continuously augmented by human input.”
An AI wouldn’t have made a typo!
But seriously, you always claim that AI is ‘just’ statistical modelling or ‘just’ curve fitting, but how do you know the human brain is doing anything qualitatively different? You’re basing your reasoning on something you _want_ to be true (that the human mind is immaterial), not on the evidence.
While it is true that cockroaches don’t play chess, it is also true that AlphaZero doesn’t scurry for a hiding spot when the light is turned on. AlphaZero also doesn’t eat paste when it’s hungry.
“threat is spiritual”
Or from the many many many instances of child sex abuse by the Catholic church?
I always find reading rants about materialism from those using a computer or phone amusing. It is like tech is bad until you get used to it.
I worked on a project years ago called Lobster Brain, to build a computerized version of a lobster’s brain. Quite similar to a cockroach, actually. The problem was insufficient computer capacity to handle the needed parallel computations. The project was successful but realtime was out of the question.
Computer power hasn’t increased much even though using hundreds of TPUs is impressive. So, yeah, nearly 50 years ago, AI stopped trying to simulate actual neurons and searched for something more practical and much higher functionality is the result. If that’s what you meant, you’re way behind the curve.
It seems like you’re urging a false dichotomy. Can’t there be more than one spiritual threat? And it’s far from clear how the wickedness of certain priests and bishops shows that the Catholic Church is wicked when (1) the Church includes other priests and bishops besides, to say nothing of laymen, and (2) the Church is not constituted by her members anyway. But since this is completely off topic, I suggest we leave it at that.
It’s also hard to see the self-contradiction in non-materialists’ use of material things to pursue their ends. Is it your view that science has shown that the mind is material so that the non-materialist must be anti-science and therefore inconsistent by relying on science to create working computers?
Is it your view that science has shown that the mind is material so that the non-materialist must be anti-science and therefore inconsistent by relying on science to create working computers?
The non-materialist view is not so much anti-science as it is the least likely explanation.
Nonphysical/physical interactions are nowhere else to be found. It’s also a dead-end where deeper understanding is perforce negated. A lot like saying thunder is Thor pounding his hammer. Not much of an explanation.
Interesting response. A couple of things stand out to me. First, you say that the non-materialist view is the least likely explanation, but you don’t specify just what facts need to be explained. So that makes it sound like the non-materialist view is just unlikely regardless of the facts (though I recognize that need not represent your opinion on the subject). The other thing that stands out is that the reasons you give for thinking the non-materialist unlikely are both philosophical rather than scientific. Thus, I’m still in the dark why there should be any irony in non-materialists using computers.
you say that the non-materialist view is the least likely explanation, but you don’t specify just what facts need to be explained.
Can’t help but notice you didn’t either. You get to argue against what I say while not showing your hand — a shoddy trick. What are YOUR reasons for thinking the mind is non-physical? IOW, what’s YOUR evidence?
I would think we were talking about a non-physical entity, presumably the mind, interacting with a physical one, the brain. What evidence do you have this might be true? What are YOUR facts?
The following are some of the facts presented as questions:
makes it sound like the non-materialist view is just unlikely regardless of the facts
Well, yes, if only from the supposed non-physical/physical interaction. Where else can such a fantastic thing be found? What are the “facts”?
the reasons you give for thinking the non-materialist unlikely are both philosophical rather than scientific
So, to you it is a philosophical question unconnected to reality? That is, devoid of physical evidence? Again, what are you using for evidence? I have yet to see any convincing argument for a non-physical mind. Give it a whirl. Perhaps you will be the first.
You are confusing general discussion with philosophical. As for it being scientific or not, what exactly do you think that is?
You could take what I wrote as a criticism of what you said, but it’s better to see it as a clarification, to avoid giving anyone the impression that a material mind is a theorem of physical science rather than a thesis of a certain philosophy which is often felt to be congruent with science. The difference is crucial for my purposes, since I was trying to prod Justin into showing how it is inconsistent for someone to claim that the mind is immaterial while still taking advantage of modern technology.
To defend the immateriality of the mind in detail would be a quite an undertaking, so I can only sketch the case here. The strategy is to identify something that we are able to do that would be inexplicable if our minds were material. One line goes something like this: (1) When we apply our minds to “plus” two numbers together, there is a fact of the matter about whether we are truly executing “plus” or some other operation such as Kripke’s “quus”; (2) of no material object that seems to execute “plus” is it the case that there is a fact of the matter whether it is truly executing “plus” rather than “quus” or some other operation; (3) therefore, our minds are not material objects. The second premise is just an application of the observation that physical facts do not outright imply their explanations. The first can be defended by noting the radical form of skepticism that would be required to defeat it. But for details, I can only point out links where a professional lays out the argument (http://edwardfeser.blogspot.com/2018/08/the-immateriality-of-mind.html) and a related one closely connected to the question of AI (http://edwardfeser.blogspot.com/2019/03/artificial-intelligence-and-magical.html).
To answer some of the difficulties, I agree that precisely how an immaterial mind can be related to, or affect, a physical body seems mysterious. One thing to guard against is the thought that the mind is an efficient cause of the body’s actions. That can’t be right since, for example, it would violate conservation laws. I think the traditional way of thinking about it is to say that the mind determines what the body does by setting ends for it. This would be sort of like the rider of a donkey dangling a carrot in front of it, so that the animal does all the work while the rider decides where they go. Or you could think of the mind as a playwright who is responsible for the play even though the actors are all that the audience sees.
Questions about how the mind relates to the brain and related matters are also given a satisfactory answer (I think) in the traditional metaphysics that says the soul is the form of the body and that the two together are a unity. But now we’re getting severely off topic.
Assumes facts not in evidence.
What do you believe thoughts are? Why do they seem to require a brain? If they originate from some nonmaterial mind what is the mechanism for their interaction? What is special about the brain that it alone can interact with the metaphysical?
Feser, et al.
Philosophers spend a lot of time classifying but provide little in the way of answers. Much like a zoologist classifying butterflies without getting any closer to explaining life.
If you are into that sort of thing, try reading Hofstadter. The Mind’s I and Goedel, Escher, Bach.
I agree that precisely how an immaterial mind can be related to, or affect, a physical body seems mysterious.
To say the least. So far you’ve avoided it.
traditional metaphysics that says the soul is the form of the body and that the two together are a unity.
You do realize that’s a meaningless statement, yes?
Every tree hugging guru gives that answer as if it IS an answer. The only thing missing is how Gaia fits in.
“Assumes facts not in evidence.”–What? Do you mean it’s unclear that we’re actually doing addition when we think we’re adding numbers together, or do you mean that physical facts actually imply their explanations despite the obvious counterexamples, or do you mean that there are more facts about material objects than just physical facts? Those are the only ways I can see to get out of the premises. Are there other ways you can see?
I don’t think the line of questioning about the brain really gets at what matters here. To see why, imagine someone who insisted that playwrights don’t exist, and then imagine him asking your questions about the brain in terms of plays (=thoughts), actors (=the brain), and a mysterious “non-actor” (=the mind). The result should strike you as odd.
Anyway, thanks for the book recommendations. If you’re serious about understanding the position of those of us who say the mind is immaterial, the links I gave you are a far better resource than these remarks.
I don’t think the line of questioning about the brain really gets at what matters here
Clearly. I have been trying to get you to describe how this supposed inaction works and am getting nowhere. I’m beginning to think you can’t.
You start off by discussing some thought process but assume you have established what a thought process is. Instead of talking about adding two numbers, you should be concentrating on the more general: what you believe thinking to be; how it interacts with the body; and how they can cause things like muscle operation.
Personally, I think thoughts are the result of brain activity and can provide my reasons for believing so. In and of themselves they are not actions but normal processing.
The questions about the brain actually voice some of my evidence for a physical mind, but instead of addressing them you want to ignore them and provide little in return. Instead of making your case, you refer me to “experts” as if you don’t quite understand enough to summarize them.
Until you are willing to answer my questions I will remain unconvinced.
“Interaction” and not “inaction”.
Posting from my phone is tedious and error prone.
I think there’s a confusion here about what an immaterialist needs to show. It is true that a materialist must argue that everything we do can be accounted for by matter, and therefore needs to present a complete philosophy of mind before he can get started. But an immaterialist can readily admit that some thought processes, such as unprocessed sense perception, can be accounted for by matter while others can’t. In fact, logically speaking, all that the immaterialist needs to do is exhibit one activity, whether or not it counts as a thought process, that we can do that no merely material object can do (the immateriality of the mind follows because the explanation of our being able to do such a thing would surely not lie in our material aspect!). Arithmetic qualifies as the sort of activity that will serve our purpose since no merely material thing can do it; that was premise (2) for which I argued above.
In your questions about the brain, you ask for a mechanism to account for the interaction between the immaterial mind and the material body. My point about the imaginary playwright is that this is a bad question, because to insist there is a “mechanism” at all begs the question against the immaterialist. It’s like asking which member of the cast the alleged playwright would have to be in order to have written the play. To the other questions, I would say a thought is a comparison between two concepts in point of their agreement or disagreement in some respect. Considered merely as such, thought does not require a brain, but it ordinarily does for us because we are embodied creatures (rational animals). There is, again, no “mechanism” linking the mind to the body. They are not separate substances such that the one could be the efficient cause of the movements of the other; instead, the mind causes the body to act by setting ends for it to pursue. Finally, the brain is not unique in being able to interact with the mind, since when you see a tree, the tree is present to your mind and therefore interacting with it. This interaction is of course mediated by the body, but that doesn’t negate the fact of the interaction. Still, since you are a separate substance from the tree, the obvious difficulty about how your mind could control it kicks in, so there is no way you could control the tree with your mind alone.
I referred you to experts because I thought we were drifting too far away from AI, and, not being a professional philosopher, it seemed fair to point you in the direction of more detailed resources where you could explore this theory of the mind at leisure and in more careful detail.
all that the immaterialist needs to do is exhibit one activity, whether or not it counts as a thought process, that we can do that no merely material object can do
You haven’t succeeded.
They are not separate substances such that the one could be the efficient cause of the movements of the other; instead, the mind causes the body to act by setting ends for it to pursue.
Sorry, but that comes across as gobbledygook. Have you ever been to a Positive Thinking seminar? Here’s a paraphrase from Jonathan Livingston Seagull: to get to the next level you have to be there before you even leave. How is your statement materially different?
It causes the body to act by setting goals? Really? How would this be accomplished?
My point about the imaginary playwright is that this is a bad question, because to insist there is a “mechanism” at all begs the question against the immaterialist
A “mechanism” is a way of accomplishing a task. IOW, how does it work? Asking how is begging the question? No wonder you can’t answer.
when you see a tree, the tree is present to your mind and therefore interacting with it
As any guru would tell you. I think they call it Being One with Nature.
Nonsense! What you get is the subjective CONCEPT of the tree; not the tree itself. IOW, your idea of the tree and what it means to you. Nothing else.
Again you talk about the subject of thought and not the thinking itself and seemingly can’t see the difference. A bit like explaining TV broadcasting by enumerating what might be broadcast instead of how broadcasting works.
I referred you to experts because I thought we were drifting too far away from AI
Not really. Well, maybe. The current “can be anything” AI perhaps. Can a machine (as a physical entity) think? Your answer seems to be NO. Can animals think? Your answer appears to be MAYBE. Mine is a definite YES to both.
At any rate, thank you for the referral.
@ Tim Simmons,
“I think there’s a confusion here about what an immaterialist needs to show.”
Yes, but the confusion is on your part.
“It is true that a materialist must argue that everything we do can be accounted for by matter, and therefore needs to present a complete philosophy of mind before he can get started.”
Wrong. Materialism is the default position. We know that the material world exists (unless you’re a hard solipsist) and we know that material explanations work because they make accurate predictions. We don’t know that anything immaterial – such as a soul – exists at all. To put this another way, why doesn’t your immaterial explanation also have to come up with a complete theory of mind?
Also, as ‘immaterial’ just means ‘not material’, what’s to stop me inserting my favourite ‘not material’ explanation, such as magic pixies?
“Arithmetic qualifies as the sort of activity that will serve our purpose since no merely material thing can do it”
A computer can do arithmetic.
Of course you’re free to accept or reject my responses to your questions. I still think that both the questions and the (attempted) answers are beside the point. After all, astronomers have given us good reasons for thinking that dark matter exists, but they have no direct observational evidence and they don’t know exactly how it affects ordinary matter. By analogy, it’s hard to see why there couldn’t be good reasons for supposing the mind to be immaterial even without an account of how it interacts with matter. If the analogy’s good, it seems to me that the only way to move the conversation forward is for you (1) to argue against (rather than merely deny) the premises of the argument I presented, and (2) to present an argument (rather than a list of questions) of your own to show that the immateriality of the mind is impossible and not just difficult to understand.
I would say the notion of there being a “default position” is fallacious. It’s the same sort of fallacy people commit when they try to pass the buck by bringing up “burdens of proof” and the like. The point is that the “burden of proof” is on anyone who wants to make a claim. To be fair, you claim that materialism is good enough for mind since it makes good predictions. Well, it’s true that physical theories make good predictions about the material world, but it’s not clear (to me) that there are physical theories that reach a similar level of predictive success when it comes to the mind (do you know of any?). However that may be, it’s simply untrue that we don’t know of any immaterial things: thoughts, ideas, intentions, conscious experiences, abstract objects such as numbers, and so forth are not material. Perhaps materialism adequately accounts for these prima facie immaterial objects but, without clear and compelling reasons for thinking materialism to be true and not merely predictively adequate, the theory must remain open to potential counterexamples at every turn.
I’m sure we both agree that chalking up the mind to magic pixies would be going beyond the evidence.
Finally, I gave an argument to show that computers cannot do arithmetic. (Well, near enough. Strictly speaking, the argument is that, at best, there is no fact of the matter about whether or not a material object is actually doing arithmetic when it appears to). Given how they’re designed, it is clear that no computer actually executes “plus” correctly: when the numbers get large enough, the computer must either overflow or run out of memory.
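The overflow point can be made concrete. Here is a minimal sketch (using Python purely to simulate a 32-bit two’s-complement register, an assumption of mine, not anything from the discussion): a fixed-width adder implements a modular operation that agrees with “plus” on small inputs but diverges at the boundary, which is exactly the kind of “quus”-like divergence Kripke’s puzzle trades on.

```python
def add32(a, b):
    """Add the way a 32-bit two's-complement machine register would."""
    s = (a + b) & 0xFFFFFFFF                 # keep only the low 32 bits
    # reinterpret the top bit as a sign bit
    return s - 0x1_0000_0000 if s >= 0x8000_0000 else s

print(add32(2, 3))                # 5: agrees with "plus" on small inputs
print(add32(2_147_483_647, 1))    # -2147483648: wraps around, not "plus"
```

Whether one thinks this settles anything philosophically is another matter, but it does show the precise sense in which the hardware operation and the mathematical function come apart.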
The human mind is immaterial, yet is influenced by the material and communicates through the brain with others, while living here on the earth, and yet also communicates with God in the spirit, Who is Spirit.
If one chooses to not believe that God is, that one has put one’s self in a position of impossibility as far as to whether the mind is material, or is immaterial.
When one’s brain ceases functioning as far as machines and tests can tell, that does not mean the mind ceases to function although it can no longer communicate in the earth, as the mind is spirit.
St. Paul tells us to renew our minds with the mind of Christ. Christ’s mind certainly did not cease to function when His brain did at His death, for He truly died. And remember, that death could not hold Him.
So with AI, it can be a good if used properly.
God bless, C-Marie
astronomers have given us good reasons for thinking that dark matter exists
But they do have reasons. They also are acutely aware they are postulating and thus could be wrong. You seem certain in your convictions and unable to articulate your reasons. The analogy doesn’t hold.
it’s hard to see why there couldn’t be good reasons for supposing the mind to be immaterial even without an account of how it interacts with matter.
1) there could be good reasons for assuming thunder is Thor pounding his hammer. Are they sufficient?
2) there is no evidence whatsoever that anything immaterial exists at all let alone that which can interact with matter.
3) if there are good reasons for immateriality of the mind you are keeping them to yourself.
argue against (rather than merely deny) the premises of the argument I presented
Your argument makes no sense and your conclusion doesn’t follow. You seem to believe that WHAT is being thought about somehow is sufficient evidence for the makeup of the mind. It’s like saying that a TV broadcast of a religious nature somehow proves that all broadcasting is of that nature and the mechanism itself behind broadcasting is religious in nature.
If the mind is indeed material then anything it thinks about is irrelevant to the question and is perforce demonstrating the capabilities of the material. You’re going to have to give a better argument.
I suggest you start by explaining how the mind and brain can interact. Then progress to what thoughts are.
I suppose you are going to say that I am dismissing your argument. I am for the stated reasons. Is it your only argument or are the rest similar?
to present an argument (rather than a list of questions) of your own to show that the immateriality of the mind is impossible
1) That’s just lazy. You are saying you can’t see the statements in a question.
2) I never said impossible; I said unlikely.
3) Some of those questions indeed need answers. I recognize that you can’t answer them thus wish them away.
4) You have yet to make a sensible argument for immateriality yourself
To indulge your laziness:
1) There is a supposed interaction between the material brain and the nonmaterial mind. There don’t seem to be any examples or evidence of such an interaction being possible let alone having actually occurred. There are no examples or evidence of the immaterial anywhere else.
a) What other physical things can interact with the non-physical?
b) What is so special about the brain that allows this interaction?
c) Why is the brain needed at all?
2) The mind is unique to a specific brain, and the brain cannot establish a similar connection with more than one mind. Why is that?
3) Brain damage affects mental abilities. Why?
4) Malnutrition affects mental abilities. Why?
5) Drugs affect mental abilities. Why?
Yes, I’m aware of the stock answers to the last three but they assume a nonphysical mind. Talk about begging the question.
All of these taken together point toward the mind being a product of the brain. And not just human brains but all brains. As one progresses up the evolutionary ladder, the capabilities become more complex.
we don’t know of any immaterial things: thoughts, ideas, intentions, conscious experiences …
What is your evidence that thoughts are immaterial? What if they were merely energy pulses running along established neural paths?
What is an idea but a thought?
What is an intention but a thought?
All of what you mentioned are merely subjects of thought.
As for “conscious experience”, no one knows what that is. However, the brain consists of many recognizers, and there is no reason not to think that at least one of them is capable of recognizing self.
Given how [computers were] designed, it is clear that no computer actually executes “plus” correctly: when the numbers get large enough, the computer must either overflow or run out of memory.
Stick to what you know. Computers operate correctly within their capabilities and there are ways around the end points.
Most humans cannot do much arithmetic in their heads either. And those who can are often considered aberrations. Strangely, sometimes a head injury leaves a person with new abilities such as musical genius. Explain that.
Humans are mostly unable to memorize long lists (> 7 items) without resorting to mnemonic tricks. The tricks humans perform extend their capabilities, but in essence this is no different from extending a computer’s capabilities.
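The “ways around the end points” mentioned above can be illustrated with arbitrary-precision (bignum) arithmetic, which extends a fixed-width machine much as mnemonic tricks extend human recall. A toy sketch (the little-endian digit-list representation and the function name are my own illustration, not any library’s API):

```python
def big_add(x, y):
    """Add two numbers stored as little-endian base-10 digit lists.
    A toy bignum: the machine carries digit by digit, pencil-and-paper
    style, so the only limit on size is available memory."""
    out, carry = [], 0
    for i in range(max(len(x), len(y))):
        d = (x[i] if i < len(x) else 0) + (y[i] if i < len(y) else 0) + carry
        out.append(d % 10)     # current digit
        carry = d // 10        # carry into the next column
    if carry:
        out.append(carry)
    return out

# 999 + 1 -> 1000, stored little-endian
print(big_add([9, 9, 9], [1]))   # [0, 0, 0, 1]
```

Real arbitrary-precision libraries (and Python’s own built-in integers) work on the same principle, just with larger machine words per “digit.”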
@ Tim Simmons,
“I would say the notion of there being a “default position” is fallacious.”
You are claiming that if a material explanation for something can’t be found, then the immaterial explanation wins by default. To avoid this, you’ll have to come up with a proper theory of how this immaterial mind works, complete with testable predictions.
“thoughts, ideas, intentions, conscious experiences, abstract objects such as numbers, and so forth are not material.”
Thoughts, ideas and so on are things the brain does. Numbers are abstracted from material objects and only ever exist in material form.
“the theory must remain open to potential counterexamples at every turn.”
One problem: you can’t produce any evidence for any such counterexamples.
“I’m sure we both agree that chalking up the mind to magic pixies would be going beyond the evidence.”
No, we most definitely do not agree. Unless you can prove that the immaterial exists, then magic pixies are the true explanation.
“Given how they’re designed, it is clear that no computer actually executes “plus” correctly: when the numbers get large enough, the computer must either overflow or run out of memory.”
Who told you this nonsense?
“You seem to believe that WHAT is being thought about somehow is sufficient evidence for the makeup of the mind. It’s like saying that a TV broadcast of a religious nature somehow proves that all broadcasting is of that nature and the mechanism itself behind broadcasting is religious in nature.”
That’s almost right, but the analogy is flawed. It’s more like saying religious broadcasts show that TV isn’t limited to sitcoms. Or, better still, it’s like saying that the quality and coherence of the dialogue show that the play was written down in advance rather than improvised on the spot.
Your positive arguments (1) and (2) presuppose substance dualism which is not what I’m arguing. For (3)-(5), the facts you mention are not incompatible with the immateriality of the mind, as you admit. (There is no circularity in assuming an immaterial mind here since the arguments are intended to show that (3)-(5) don’t rule out an immaterial mind, not that they positively suggest one).
The notion that thoughts could just be “energy pulses running along established neural paths” is absurd on its face. The thought “Snow is white” is true, but no energy pulse can be regarded as true or false. Worse than that, an energy pulse cannot have meaning. And the same conclusions hold whatever material thing you suggest to take the place of thoughts. Hence, if thought is any material thing, then our thoughts are meaningless, and even if they were meaningful, they couldn’t be true. If you’re serious about pursuing this sort of reductive materialism, I suggest googling Alex Rosenberg, one of its proponents, to see how hard he has to work to get it off the ground.
Your point about computers doesn’t conflict with what I said. I did not claim that computers do not produce correct results. I said they do not implement true arithmetic, and that is demonstrably true and so well known that every good programmer is aware of the fact and takes it into account in his work.
You again focus on the WHAT of thinking believing it some way indicative of the HOW. The TV broadcasting analogy was to illustrate the fallacy of doing so but even there you went straight for the content.
Merely telling me that materialism addresses my arguments X, Y and Z means nothing. You need to show how it does.
Your arguments are poor indeed. You don’t seem to know your own position beyond a belief that it is true. You don’t seem to have anything beyond some focus on thought content. I have repeatedly asked for the HOW of your immaterial mind, but you continue to avoid it.
energy pulses running along established neural paths” is absurd on its face.
Really? It’s what computers use. Are you saying computers are absurd? Strange thing coming from someone obviously employing one.
I did not claim that computers do not produce correct results. I said they do not implement true arithmetic
Completely missing the point that humans also have limitations, yet for some reason computer limitations count while human ones don’t.
Give it up. You will never convince DAV he exists.
Sentence should have read:
Merely telling me that immaterialism addresses my arguments X, Y and Z …
Is that what he’s been trying to do?
“(1) When we apply our minds to “plus” two numbers together, there is a fact of the matter about whether we are truly executing “plus” or some other operation such as Kripke’s “quus”; (2) of no material object that seems to execute “plus” is it the case that there is a fact of the matter whether it is truly executing “plus” rather than “quus” or some other operation; (3) therefore, our minds are not material objects.”
Yet remove or damage the physical brain and you cannot “plus” numbers together any longer (or do much of anything else). This seems to indicate that what you conceptualize as the mind is actually just the regular ol’ physical brain.
Actually, the poker stuff is interesting because they are applying game theory instead of deep learning to an incomplete information setting. These are still statistical models in some sense, but not really curve fitting.
But the point still holds: closed, static systems are much easier to solve than open, evolving systems, and people are designing the framework within which the computers work.
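The game-theoretic approach mentioned here can be sketched with regret matching, the update rule at the heart of the counterfactual-regret methods used by poker programs. What follows is an illustrative toy on rock-paper-scissors, not the actual code of any poker system: under self-play, the average strategy drifts toward the uniform Nash equilibrium.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy(regret):
    """Regret matching: play each action in proportion to its positive regret."""
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iters=20000, seed=0):
    rng = random.Random(seed)
    regret = [0.0] * ACTIONS
    strat_sum = [0.0] * ACTIONS
    for _ in range(iters):
        s = strategy(regret)
        for i in range(ACTIONS):
            strat_sum[i] += s[i]          # accumulate the average strategy
        me = rng.choices(range(ACTIONS), weights=s)[0]
        opp = rng.choices(range(ACTIONS), weights=s)[0]  # symmetric self-play
        for a in range(ACTIONS):
            # how much better action a would have done than what we played
            regret[a] += PAYOFF[a][opp] - PAYOFF[me][opp]
    total = sum(strat_sum)
    return [x / total for x in strat_sum]

avg = train()
print(avg)  # each probability close to 1/3 (the Nash equilibrium)
```

Note that nothing here is curve fitting: the program converges by bookkeeping counterfactual regrets, not by fitting a model to data.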
people are designing the framework within which the computers work.
To date they do. True AI likely won’t happen until what D. Dennett calls the bottom-up approach is used. Bottom-up is closer to how the brain is organized, with many similar, simple, and independent components acting in concert. Currently, this is computationally infeasible. Don’t confuse current capabilities with future ones.
An example of bottom-up, i.e. unplanned, (termite mounds) vs. planned top-down (Gaudi cathedral):
I studied AI years ago, both the ‘frames idea’ and while working for Cigna Insurance, which had started an AI program. Here is a wonderful and short interview by the philosopher Hubert Dreyfus, who wrote the great “What computers can’t do” 1972 and then 20 years later “What Computers still can’t do” —
Less than 8 minutes but beautiful
“What computers can’t do” 1972 and then 20 years later “What Computers still can’t do”
Rather shortsighted. Bet many said the same about this until Igor Sikorsky came along:
Pointing out shortcomings in current technology and then implying they will always be with us is usually a bad idea. The brain, with its billions of parallel pathways, is still far more powerful than any existing computer.
Speaking of the Birthday Party story, understanding English sentences requires a breadth of knowledge that is not possible to achieve with current technology. Plus there are ambiguities that even a human can’t resolve.
For instance: “The boy is on the hill with the telescope.”
Who has the telescope, the boy or the hill?
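The telescope sentence is the classic prepositional-phrase attachment ambiguity. A toy sketch (the helper function is invented purely for illustration): enumerate the noun phrases the trailing PP could modify, which is exactly the choice a parser faces and a human, absent context, cannot settle either.

```python
def pp_attachments(noun_phrases, pp):
    """Every preceding noun phrase is a candidate host for the trailing PP,
    so the sentence has one reading per candidate."""
    return [f"[{np} {pp}]" for np in noun_phrases]

readings = pp_attachments(["the boy", "the hill"], "with the telescope")
for r in readings:
    print(r)
```

Resolving the choice requires world knowledge (boys carry telescopes; hills can have observatories), which is the breadth-of-knowledge problem mentioned above.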
Still, strides are being made: