Since it is Silly Saturday, a few fun back-of-the-envelope calculations on simulating a brain. I’m drawing from the marvelous, must-read (go do it now) essay “The empty brain: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer” by Robert Epstein.
(Update: About what I mean by “simulating”, see the exchange with DAV below.)
Lots to mine from this article, many fascinating implications, which we’ll come back to in the future. For now, what about the idea that we can “simulate” a brain, in the Ray Kurzweil sense of being able to “download” a man onto a chip? Can we quantify the scope of the problem?
Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic.
Okay, that’s 100 trillion interconnections, or 10^13, times the number of active proteins (10^3) at each connection, and we’re at 10^16 degrees of freedom at a minimum. For each “moment” of action. (The basic “step time” unit is microseconds or smaller.)
And this is only in the brain itself, and doesn’t include the rest of the nervous system (which, in our metaphor, makes it sound like a separate entity) and its connections. Then add That Which We Do Not Yet Know about workings we should be modeling but aren’t, and can’t because, by definition, we don’t know what they are, and we’re probably at 10^20 (see inter alia “Blood exerts a powerful influence on the brain“). At the least. I’m only guessing. You can make your own guess. I’m doing all this on one cup of coffee. Mistakes will be made.
All this is happening in three dimensions. Proteins move. Chemicals swap electrons at the connections between synapses and between nerve cells and other cells in the body. And so on. This adds several more orders of magnitude. A wild guess here, which I’m happy to abandon upon cogent criticism, but I’d say, for fun, about 1,000 degrees of freedom per protein, though maybe up to a million. We’re up to 10^23–10^26. And this is on the low end. Think of it as A Very Best Case Scenario.
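For those keeping score at home, the running tally multiplies out like this. A toy sketch in Python; every exponent in it is one of my guesses from above, not a measurement:

```python
# Back-of-the-envelope tally of degrees of freedom.
# All numbers are guesses, not measurements.
interconnections = 10**13      # ~100 trillion synaptic connections
proteins = 10**3               # ~1,000 active proteins at each connection
unknown_factor = 10**4         # fudge for That Which We Do Not Yet Know
per_protein_low = 10**3        # spatial/chemical degrees per protein, low guess
per_protein_high = 10**6       # same guess, high end

low = interconnections * proteins * unknown_factor * per_protein_low
high = interconnections * proteins * unknown_factor * per_protein_high

# Report the order of magnitude (number of trailing zeros of a power of ten)
print(len(str(low)) - 1, len(str(high)) - 1)   # 23 26
```

Change any of the guesses and the exponents shift accordingly, which is rather the point: we are arguing about orders of magnitude, not digits.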
Now computers are (at this point still) made of transistors. How many transistors does it take to model the actions of one protein? Well, an Intel Quad-core + GPU Core i7, an everyday processor, has 1.4 × 10^9, and this is enough to do one protein. Not very speedily, but it can do it with some power left over. Is one i7 enough to do two proteins interacting? I’m not an expert.
What we’re after is the number of processors it takes to simulate those 10^23+ degrees of freedom. Say a billion transistors for each degree of freedom. That puts us in need of 10^32 transistors (call it 10^31+, at a raw minimum) to fully simulate the organism which is a brain (and its connections). This simulation ignores vast areas of a human being, of course. But let’s pretend those areas don’t matter.
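The billion-per-degree guess, multiplied out (again, both factors are guesses):

```python
# Low-end degrees of freedom from the tally above, times an assumed
# billion transistors per degree of freedom. Strictly the product is
# 10^32, which I round down to "10^31+, at a raw minimum".
dof = 10**23
transistors_per_dof = 10**9
total = dof * transistors_per_dof

print(len(str(total)) - 1)   # 32
```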
I’ve mixed up the time component in there surreptitiously by speaking of modeling a protein. Probably this is another underestimation. Probably a laughable underestimation. It must be, because those proteins are made of finer stuff, all of which has to be taken into account. I wouldn’t be shocked if 10^100 (or more) is the right answer.
If we believe Moore’s “Law”, the number of transistors on a chip doubles every two years. We have a billion or so now and want to arrive at 10^31+. That’s a factor of about 10^22, or log2(10^22) ≈ 73 doublings, which at two years apiece makes it something like a century and a half. Maybe less if “quantum” computers fulfill any of their promises, maybe more depending on how badly I’ve botched the above calculations. All assuming Moore doesn’t break down and become logarithmic, which every single innovation in human history has done.
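The doubling arithmetic, for anyone who wants to check my coffee-fueled numbers (the starting and ending transistor counts are the guesses from above):

```python
import math

# How long until one chip holds the raw-minimum transistor count,
# at Moore's-Law pace of one doubling every two years?
have = 1.4e9    # transistors on one i7-class chip today
want = 1e31     # the raw-minimum count guessed above

doublings = math.log2(want / have)   # ~73 doublings
years = 2 * doublings                # ~145 years

print(round(doublings), round(years))
```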
(Incidentally, I’m betting Moore lasts only twenty years more, or even fewer, before the vigor is gone.)
Of course, that’s on one “chip”. We can string processors together and reach the goal faster. Right now we’d need 10^22 i7 computers linked up. That’s a big number.
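How big? Divide it out (same guessed inputs as before):

```python
# Number of today's i7-class chips needed to hold the
# raw-minimum transistor count guessed above.
transistors_needed = 1e31
per_chip = 1.4e9

chips = transistors_needed / per_chip
print(f"{chips:.1e}")   # 7.1e+21, i.e. about 10^22 chips
```

For scale, that is within shouting distance of the number of stars in the observable universe, give or take the usual orders of magnitude.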
So, plus or minus, it’s a century from now (or even two or three) and we might have the computer muscle, and the intelligence enough to figure out how to program such a monstrosity. No small thing, either, because many of the interactions we’ll have to model are quantum mechanical, and nobody in the world has any idea—as in NONE, even if the computers are quantum computers—of what actualizes potential states of QM objects. Are these potentia actualized one-by-one? Or is there coordination, which is to say a sort of entanglement, between some, a few, all of the elements? Not only do we not know this, I think we cannot know this.
Anyway, forget the insurmountable difficulty. We’ve got the thing. We switch it on and…
It still won’t work. “Brains”, which is to say the organisms (in their entirety) which are us, are not solely material. Our intellects are not just the physical stuff which makes us up. We are more than animated dust.
This sad finding destroys some science fictional concepts, but it invites new ones.