I received this thoughtful, and may I say highly accurate, email from reader EN, which I include in full below. I removed the age and name to protect EN’s identity. I also added links where relevant, and corrected a couple of typos my enemies managed to slip into the email as it made its way from EN to me. You will agree by email’s end EN will go far in life.
It gives me great pleasure, as well as excitement, to write to you. I am a huge fan of your work, and of your philosophy in general.
I especially admired your lecture at the DDP 33rd annual meeting, where you talked about why probability and statistics cannot discover cause, and how the p-value refutes itself. This is my philosophy too, and I was amazed to learn that you had articles and lectures explaining this philosophy in detail.
Firstly…I am a researcher in Artificial General Intelligence. I read your article responding to the Quanta Magazine interview with Dr. Judea Pearl, and I agree with all your points about why machines cannot have a brain-like system. While it may seem that the nature of my work must disagree with what you’ve stated, it actually doesn’t: I believe the best we can achieve in that field is a human-like simulation of intelligence. Nothing more than that. I have already commenced building certain blocks of the system. I would definitely love to talk to you about that one day!
I was wondering if you’d have any free time to talk about the over-hyped ML and DL systems that are called “AI” these days, and how they are merely a glorified failure when it comes to intelligence. I mean, how can people call a model that studies the frequency of one word appearing in a large dataset in relation to another word an intelligent system, and even label it “Natural Language Processing”? The same goes for computer vision: since when does studying the label of an image in relation to a vector of features or pixels count as intelligence?
Even when they do say it is AI, do they not understand statistical distributions? Do they not understand that they need huge labeled datasets to study one single distribution, and that their models fail at predictions on the same domain but a different distribution? It is very ironic: they test their models on the same distribution as the training set, and call that learning!
It is like me teaching a kid to add up single- and double-digit numbers, and saying that I’m teaching him addition. The dataset would include something like 22+11=33 and 22+9=31, and then we ask: what is 22+10? The model has to approximate an answer between 31 and 33. This is exactly what the curve-fitting addicts are doing, but they call it teaching a machine how to add numbers, when in fact asking the machine what 300+200 is will yield a garbage answer, because it is outside its data distribution.
Dr. Yann LeCun argues that neural nets can reason. How is that, exactly? He also argues that NLP models do understand what they’re doing “in some sense,” to quote him, and says that neural nets are capable of capturing causes.
My work is at the intersection of philosophy, psychology, cognitive science, computational neuroscience, knowledge representation, knowledge-based systems, and so on. I am a firm believer in connectionism.
Although I am a firm believer in my own philosophy, and despite being very confident in what is AI and what is not, I sometimes get really discouraged by comments or podcasts like these from people like Dr. LeCun, when I can see clearly the limitations of what they’re doing, and yet people still take their word over mine. At the end of the day, when it comes to prominent figures like Dr. LeCun, or Dr. Andrew Ng, or others for that matter, it is their word against mine, and no matter how talented or good I am at what I do, I am a nobody.
These are just some little random thoughts. I would definitely love to talk to you more about my views on their work, and my own work, and to listen to your views and feedback. I am writing this without expecting an answer or feedback, but a guy can hope. A friend of mine actually calls me Mini-Briggs, because of the same philosophy we adopt!
Thanks again for your time, and for reading this!
Your biggest fan,
EN
Everybody wants in on the AI hotness, so even linear regressions are being called “AI”. And, in truth, they are. They are just as much AI in spirit as any “deep learning” algorithm. They are fitting curves; that, and nothing more. Machine “learning”, AI, neural nets, all the same thing, albeit with more or less clever computer processing. See Statistics Vs. Artificial Intelligence.
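EN’s addition example is easy to reproduce. Here is a minimal sketch, assuming numpy and scikit-learn are installed; the model choice and numbers are mine, purely for illustration:

```python
# A sketch of EN's addition example: a flexible curve fitter is trained
# on single- and double-digit sums, then asked for 300 + 200, which
# lies outside its training distribution.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
a = rng.integers(0, 100, size=5000)   # addends between 0 and 99
b = rng.integers(0, 100, size=5000)
X, y = np.column_stack([a, b]), a + b

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

print(model.predict([[22, 10]]))    # close to 32: inside the distribution
print(model.predict([[300, 200]]))  # nowhere near 500: outside it
```

Ironically, a plain linear regression would get 500 exactly, because addition happens to be a linear function; the failure shows up as soon as the fitter is flexible enough to be fashionable. Either way, it is curve fitting, and nothing more.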
It’s been several years since I’ve updated this, but it’s still in reasonable shape: Machine Learning, Big Data, Deep Learning, Data Mining, Statistics, Decision & Risk Analysis, Probability, Fuzzy Logic FAQ.
Now LeCun. His favorite movie, it’s claimed in this podcast, is 2001: A Space Odyssey, which I regard as a well-photographed butt-numbing bore, with clever-for-its-time elements. HAL was interesting, but incoherent. The thirty- or forty-minute kaleidoscope shot at the end—does everybody somehow forget this?—is the best anti-drug advertisement ever invented.
This is relevant because LeCun thinks HAL freaked because it was insufficiently guided. I claim any computer, barring electrical shorts and the like, can only do what it’s told to do. And nothing more. Ever.
LeCun says we put in place laws “to prevent people from doing bad things because [otherwise? fun to new shoe?] they would do these bad things. So we have to shape their cost function, their objective function, if you want, through laws to correct…for those.”
False. Laws don’t stop people from doing anything. Respect for authority and fear of punishment do, to name two. Laws are merely the reference point for both. People do not have cost and objective functions in the utterly and necessarily simplistic way computers do. Most human desires and motivations are, as regular readers know, unquantifiable, and therefore unprogrammable. There is no way to account for the non-material intellect, the appetite of the intellect, i.e. the will. You cannot program a computer to ignore its programming—no matter what word games you might play trying to get around this unhappy fact.
Nevertheless, LeCun says “designing objective functions for people is something we know how to do.” No it isn’t. This sounds like that Cass Sunstein nudging nonsense set in binary.
Of course, much behavior can be controlled, guided, directed in broad ways, through all the classical means we know of. But this is not the same as writing code which explicitly says “When this happens, do this.”
That is all that computers can do: when this, do that. There are two considerations.
First, the list of “when this” can grow long and unmanageable, such that the extent of the “do this” can become unpredictable. To the human mind tracking the operations, that is. Think of finding every possible chess move. Chess is trivial to code with only a tiny number of allowable moves, but the number of possible positions (says one source) is about 10^27586. Dat’s a lotta zeros!
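If you want to watch the explosion happen, here is a minimal sketch, assuming the third-party python-chess package; it counts every move sequence of a given depth from the opening position:

```python
# Counting how fast the "when this" list grows, assuming the
# python-chess package (pip install chess) is available.
import chess

def perft(board: chess.Board, depth: int) -> int:
    # Count all move sequences of the given depth from this position.
    if depth == 0:
        return 1
    total = 0
    for move in list(board.legal_moves):
        board.push(move)
        total += perft(board, depth - 1)
        board.pop()
    return total

board = chess.Board()
for depth in range(1, 5):
    print(depth, perft(board, depth))  # 20, 400, 8902, 197281
```

Four plies in and we are already near two hundred thousand branches. No human mind tracks that list, and it only swells from there.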
The problem with an “AI” system that allows more moves than chess and has more complex rules will be that nobody will ever know with certainty what the computer is capable of doing. Which is to say, we will always know the list of allowable “do this”, but it will be hard to predict the path to any particular “do this” from the swelling inputs of “when this”. Which makes the idea some scientists have of hooking up all the nukes to an AI system allowed to “push the button” insane. Ours is an age which specializes in insanity, however.
Second consideration. The computer will always be a dumb beast deterministically carrying out its instructions. This is so even for quantum computers. It is still “when this, do that”, though with quantum computers you get lots of “when this”s at a time. Piling up the lists, or making the whole shebang go faster, does not turn dumb into intelligent. The links (and links within links) above about Pearl make this argument.
The hope some AI researchers have is that we are dumb beasts ourselves. That we, like computers, operate wholly deterministically, albeit with much more complex code. That we are naught but meat machines. That this can’t be so has been proven time and again. But the proofs don’t stick. The desire that the proofs be in error is too strong.
Funny, that. That the proofs can be cast aside shows, i.e. proves, that if we were meat machines we can cast aside our programming, and if we can cast aside our programming, we are not meat machines.
The other hilarity, also proving we are not meat machines, is on full display with LeCun, who, as many before him, in effect says, If we can convince people they can’t make free choices, they will make better choices. And LeCun will be there to show us what those better choices are. How he alone is free to jump beyond his own meat-machineness to offer this salvation to mankind he never explains.
As long as you keep claiming that what people, and only people, do with their brains is thinking, and whatever a non-person does is not, then you will always be right, by your own definition.
The problem is that you have no idea what people do when they’re thinking. In fact, you even have no idea what you actually do to initiate walking or other motion — something so much simpler than your definition of thinking. Heck, cockroaches can do it. Saying things like “it’s only doing X (or Y or Z) and thus is not thinking” is a bit arrogant for someone totally clueless about what thinking is, let alone how it is accomplished. But then, maybe you get some comfort in claiming it.
As for computers only doing what they are told, the chess and go playing networks weren’t specifically taught game strategies. So it’s amazing they can be good enough to beat people who have devoted their lives to playing those games. What makes you so sure that the networks aren’t doing what people do but just better?
The hope some AI researchers have is that we are dumb beasts ourselves. That we, like computers, operate wholly deterministically, albeit with much more complex code.
So now we come to what you apparently perceive as the actual problem: Free Will vanishes if the workings of the mind become known. Perhaps you can take solace in knowing that not all algorithms are deterministic and, when implemented on a computer, the computer will not be operating “wholly deterministically” either.
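A toy example, using nothing beyond Python’s standard library; with no fixed seed, the generator draws on OS entropy, so the very same program prints a different answer each run:

```python
# Toy randomized algorithm: Monte Carlo estimate of pi. Left unseeded,
# successive runs of this identical program give different outputs.
import random

def estimate_pi(samples: int) -> float:
    inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                 for _ in range(samples))
    return 4 * inside / samples

print(estimate_pi(1_000_000))  # e.g. 3.1427 one run, 3.1401 the next
```

(Whether a pseudo-random generator seeded from entropy “really” escapes determinism is, of course, exactly the point in dispute.)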
that the proofs can be cast aside shows, i.e. proves, that if we were meat machines we can cast aside our programming, and if we can cast aside our programming, we are not meat machines.
Did you mean “that if we were meat machines we cannot cast aside our programming”? Otherwise, I don’t follow.
“The other hilarity, also proving we are not meat machines, is on full display with LeCun, who, as many before him, in effect says, If we can convince people they can’t make free choices, they will make better choices.”
I suppose that LeCun would contend that he cannot not say that. Further, he can’t not say that he cannot not say that… ad infinitum.
/Rant ON/
There is no such thing as quantum computing. At least, not like it’s sold to the public (and most of the professionals who should know better). What there is is computing with a remarkably expensive random number generator. Which you can accomplish quite cheaply by a number of other means. And more quickly, too.
The cause of this, like so much pseudo-scientific woohoo, is the Copenhagen interpretation. It’s utter and complete BS, but is still taught as dogma. And it leads down these nonsensical paths that absorb time and money like a giant sponge.
/Rant OFF/
My 11-yo autistic son is definitely a “meat machine”. He knows the 9×9 multiplication table but couldn’t tell you what’s 300+200, 30×20, or 6×10. I struggle to devise simple, rigid algorithms for him to learn and use; it’s like programming a slow-loading, error-prone computer with a very small working memory.
AI is like the autistic savant, a person who can perform amazing mental feats in one narrow field while lacking the most basic competence at anything else. Which makes me think the neural net has replicated part of the human brain but a key piece is missing.
Martin Luther once met a boy like my son and said, this thing has no soul, you should strangle it. My 8-yo son totally agrees, that little Hitler, but I love all my kids, souled or not.
“Funny, that. That the proofs can be cast aside shows, i.e., proves, that if we were meat machines we can cast aside our programming, and if we can cast aside our programming, we are not meat machines.”
Isn’t this simply false? What is stopping “the list” from containing two contradictory “do this” inputs? I.e., if I am a meat machine, what is stopping me from having as inputs (among many others) both (1) “When X, delete the proofs from yourself” and (2) “When X, keep the proofs”? For the proofs themselves are our programming.
“I sometimes get really discouraged by comments or podcasts like these from people like Dr. LeCun, when I can see clearly the limitations of what they’re doing, and yet people still take their word…” – guest blogger
Perhaps they aren’t able to think “outside [their own] data distribution” boxes? //;o]
None of us can avoid bumping into the walls of the data distribution boxes we’re all confined in. What makes us human is our ability to 1. become aware of that limitation, and 2. devise strategies for moving beyond the current walls and enlarging the boxes. No machine can ever do that.
“Now LeCun. His favorite movie, it’s claimed in this podcast, is 2001: A Space Odyssey, which I regard as a well-photographed butt-numbing bore,…” – Briggs
LOL – My son hasn’t seen it, and when it came up in conversation last week I told him that I’d managed to sit through the whole thing, but on leaving the theater I thought to myself “I can’t believe I paid money for that.” I can now relate your review of it as… “…a well-photographed butt-numbing bore, with clever-for-its-time elements”, as evidence that others reacted similarly.
The only other observation I would add is that it was disorienting; no doubt deliberately so.
In fact, you even have no idea what you actually do to initiate walking or other motion
Probably pretty much tha same as a dog or a cockroach or any other entity initiating a purely mechanic sensory input/motor output.
someone totally clueless about what thinking is let alone how it is accomplished.
As long as you lr=eave it blah and undefined, you can make it mwan anything you like. But perhaps you get some comfort from that.
chess and go playing networks weren’t specifically taught game strategies.
Mechanical. even when done by grandmasters.
What makes us human is….
Our ability to abstract universals?
Wait, I thought I was your biggest fan.
“In fact, you even have no idea what you actually do to initiate walking or other motion
Probably pretty much tha same as a dog or a cockroach or any other entity initiating a purely mechanic sensory input/motor output.” – Y.O.S.
Yes. But an even bigger problem, for materialists, is that as soon as you get the urge to do something, a human can either do it OR NOT do it. And, while we can measure the brain impulse to do something, we can’t measure any decision to refrain from doing it, because there is none – we can choose to not do something, and have no detectable brain wave record of that. (All of section #5 here)
https://youtu.be/BqHrpBPdtSI?t=917
DAV has commented on that video, so I assume he(?) has watched it. If so, then he should have an idea about how free will works. Sure, we have the same mechanism (whatever it is) to initiate action as animals do, but we have some totally unknown mechanism for suppressing that, if we so choose.
It would be interesting if a study could be funded to see if animals do or do not have the ability to undo a commitment to action, in the way that humans do, assuming one could design a study that would generate interpretable data. (How do you communicate to an animal to choose to avoid acting on any impulse, once it’s initiated?)
@Ye Old Statistician — I don’t know if your keyboard went awry or if you were making a really cool statement.
I assume that you did it on purpose, but I can never truly know…
That just makes it more brilliant
YOS,
As long as you lr=eave it blah and undefined, you can make it mwan anything you like. But perhaps you get some comfort from that.
I’m guessing you think understanding the sentence with its bold misspellings proves something. Natural language has built-in redundancy but even that has limits. Can you tell me what the following sentence says (yes, it’s really a sentence that has been obliterated by noise): CsR2nU1Ik6iyvDVjTQyrwgbXosc?
Even CD players (devices far simpler than brains) correct input stream errors. What’s your point?
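To make the redundancy point concrete, here is a toy sketch using nothing beyond the Python standard library: a 3x repetition code survives light noise and is swamped by heavy noise, just as natural language is.

```python
# Toy error correction: repeat each bit three times and decode by
# majority vote. One flipped copy per triple is corrected; two flips
# in the same triple defeat the redundancy.
from collections import Counter

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    return [Counter(coded[i:i + 3]).most_common(1)[0][0]
            for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[4] ^= 1                   # light noise: one flip in a triple
print(decode(sent) == msg)     # True: corrected
sent[3] ^= 1                   # heavy noise: second flip, same triple
print(decode(sent) == msg)     # False: redundancy exhausted
```

That is why the mangled sentence was readable and the noise string above was not.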
[people initiate walking or other motion] Probably pretty much tha same as a dog or a cockroach or any other entity initiating a purely mechanic sensory input/motor output.
You seem to be saying that people are the equivalent of cockroaches and dogs when it comes to initiating motion arising from a decision to move. Since it is purely mechanical you should be able to give us the step by step procedure. I’m particularly interested in the part showing how to go from decision to initiation.
chess and go playing networks weren’t specifically taught game strategies.
Mechanical. even when done by grandmasters.
So playing chess is a purely mechanical activity? Strategy is goal setting. Goals such as trapping the opposing king or gaining control of the board center are not mechanical. Even the tactics for achieving the goals cannot be mechanical, as they have to take into account the opponent’s moves. Remember that these game-playing networks 1) examine only a few moves ahead — just like humans — and 2) developed their own strategies for winning.
“The hope some AI researchers have is that we are dumb beasts ourselves. That we, like computers, operate wholly deterministically, albeit with much more complex code. That we are naught but meat machines. That this can’t be so has been proven time and again. But the proofs don’t stick. The desire that the proofs be in error is too strong.”
People do have a way of spending much of their lives running from the proof of the truth (Christ’s Resurrection), where it matters most.
So glad that He made us spirit, soul, and body, with free will, sense, reason, and most of all in His Image and Likeness. Onwards and upwards to Heaven everyone!
God bless, C-Marie
An interesting post there, Briggs! A good summary of your insights on AI and thought. It would be an incredible leap to create a machine that’s conscious, and a leap that seems impossible for us to make.
@ Ye Olde Statistician,
“[What makes us human is] Our ability to abstract universals?”
Can’t say I’ve noticed many people going around abstracting universals. Isn’t our sense of humour a better choice of defining characteristic?
Great piece. The end reminded me of the seemingly endless parade of bores who insist we have no free will. I always wonder why, if that is the case, they bother to say anything at all – they can’t hope to inform anyone of anything, because people can’t be convinced of what they can’t help but believe or disbelieve. But I suppose these authors literally can’t help themselves…
Communication ALWAYS fails. No matter how persistent we are in defining our terms, no matter how precisely we enunciate, spell-check, and grammar-check, the words we emit will be misunderstood by everyone. Some people will misunderstand a little less than others. Everyone misunderstands.
The great miracle is that we are still able to keep moving forward. That the rigors we have in place make it possible to query and requery to ascertain if we are getting close to what the communicator is trying to say.
Yet the death toll in the US for auto accidents is not unacceptable. People are driving around with their heads in their phones and still thousands of accidents within a mile of you DID NOT happen.
I swear that I am just trying to point at the white elephant that is sitting in the middle of the room. Everything I read from Briggs seems to point at the same white elephant. It might be a slightly different one that he sees, but damn if it doesn’t look exactly like the one I am pointing at. Ye Old Statistician seems to do the same thing. I am happy to accept that I am wrong (and I am always wrong because no matter what I do, the buckets I use to help me define the world are always wrong, but that is true of ALL buckets made by anyone.)
I will happily sit down with anyone who haunts this forum and yap if only to keep us from being idiots and attempting to inflict our ideals on the system and bring about true despair.
Public Schools are not the terrible places they are made out to be. Private schools suffer all of the problems of public schools. Home schools run into a very dangerous problem with undercapitalization. If you are terrible at math, getting your kids to not be terrible at math is a BIG hurdle.
If a human-like simulation of intelligence is possible, then it should also be possible to simulate a greater than human-like intelligence, which would be smarter than a human (by definition). You would add processing power. Then you have two possibilities.
1) the simulation also has Will, or a simulation of it. Which means that the simulation will start doing things because it wants to, and those things will be done better than if a human did them.
2) a real human has to provide the will. And that means that we now have a superhuman, because it will also do things better than a normal human.
Either way, there is something outwitting normal humans. It does not matter much that the AI is a simulation, because a good simulation of intelligence is intelligent too.
Worse, when there is a real human on board, chances are that the thing will wreak havoc. Humans that were augmented by using the intelligence of other humans were capable of causing considerable suffering.
“Isn’t our sense of humour a better choice of defining characteristic?” – sftb
Totally agree.
Though I can’t say I’ve seen many leftists going around being funny (on purpose). //;o]
“Some people will misunderstand a little less than others. Everyone misunderstands.”
The Tower of Babel was a very, very long time ago, but the people’s behaviour was so nefarious that God not only dispersed them over the whole earth, but also changed their language so that not one understood another.
We are the recipients, also, of that dispersion and language change. Try as we might, even when two agree, if even a little bit of that which the agreement is about is delved into, there will quickly be the suggestion by one or another for a little change, here or there, and one or another will say, “But we just agreed…..”. It is possible that the difference or little change desired by one or the other will not be uttered aloud, but be sure that it will be there.
Even within Catholicism and all of Christianity, there are differences in so much…but not in dogma nor official doctrines are they allowed within Catholicism….but even the Church Fathers saw some things differently from each other. Miracle of God by His Holy Spirit, that all true Christians agree on Who Jesus is, and on what He did, and on what is required for salvation.
God bless, C-Marie
While the criticisms of DL are valid, I think the overall critique of AI is weak, except in a way irrelevant to all but philosophers.
Deep learning is a big statistical matching game, as Briggs says, not that different from using multiple regression, and is thus very limited. But deep learning is not the entirety of AI, just the current darling. There is an old approach – theorem proving – that is also limited, like DL, and was all the rage for a long time. But combining the two gets you somewhat past the limitations of each. Now throw in some heuristics, and some other not-yet-known techniques, and things get better. There is no magic there, and no playing God, just progress.
Machines can learn – they don’t have to just “do what they are programmed to do” except in a nit-picky sense. One could analogously argue that humans just do what their genetics programs them to do (leaving aside religious arguments) [caveat… yes, there’s more than genetics, but that isn’t the point].
But, in both cases, the emergent behavior can be extremely interesting. The machine may just be doing what it is programmed to do – in the detailed sense. But the aggregate behavior, which is far more interesting, exceeds the programming.
Even chess playing machines demonstrate this – nobody programmed their final behavior – they learned some of it, and tied that learning to their super-human abilities to analyze future chess moves extremely quickly (if less intuitively than humans), and to remember all of that in vast quantities with perfect precision.
I don’t know how far AI will get, but it will get a lot farther than it has so far. Will it ever truly pass a reasonable “Turing test?” I don’t know – but the “intelligence” may be very useful even if it doesn’t include the ability to simulate a human in conversation.
Machines will probably exceed human abilities to drive cars and fly airplanes – within some constraints. I don’t know if car-driving systems will ever be able to reason from the appearance of a soccer ball to the possible sudden arrival of a child in the street, but there will be ways to solve that problem in the specific, if not in the sense of true general intelligence.
This Just In…
Robert Marks on A.I.
https://www.discovery.org/multimedia/audio/2019/09/computer-engineer-bob-marks-discusses-the-perils-and-promise-of-ai/
Whatever gives you solace.
Undoubtedly the claim once was that True Flight would never be achieved until machines became like birds that flap their wings, and so such machines would always be mere approximations of birds that “fly” for a short distance.
https://img.youtube.com/vi/9yVtGHbmN4s/0.jpg
Then this radical change came along:
https://www.irishtimes.com/polopoly_fs/1.2295905.1437736702!/image/image.jpg_gen/derivatives/ratio_4x3_w1200/image.jpg
If the human mind arises from a purely physical brain then it’s simply a matter of scale until the ‘A’ in ‘AI’ stands for ‘Another’.