There is no class this week, due to events.
Here are the first 100 numbers in a series I constructed:
0.3987654, 0.9560464, 0.1675681, 0.5562341, 0.9843035, 0.06160958, 0.2305416, 0.7073786, 0.8254209, 0.5746259, 0.9747062, 0.09831162, 0.3534914, 0.9113195, 0.3222674, 0.870948, 0.4482026, 0.9862148, 0.0542128, 0.2044621, 0.6486213, 0.9088331, 0.3303989, 0.8822106, 0.4143773, 0.9676791, 0.124719, 0.435309, 0.9802256, 0.07729434, 0.2843992, 0.8115526, 0.6098517, 0.948793, 0.1937397, 0.6228901, 0.9366922, 0.2364677, 0.719974, 0.8039568, 0.6284954, 0.9310732, 0.2559113, 0.759332, 0.7287315, 0.788287, 0.665502, 0.887688, 0.3975611, 0.9550682, 0.1711219, 0.5656058, 0.9797502, 0.0791139, 0.2905201, 0.821928, 0.5836424, 0.9690157, 0.1197262, 0.4202663, 0.9715622, 0.1101751, 0.3909359, 0.9494805, 0.191277, 0.6168507, 0.9424658, 0.2162265, 0.6757981, 0.8736753, 0.4401045, 0.982608, 0.06814707, 0.2532282, 0.7540801, 0.7394838, 0.7682117, 0.7100517, 0.8209714, 0.586095, 0.9673557, 0.1259248, 0.438912, 0.9820327, 0.07036013, 0.2608308, 0.7688121, 0.7087658, 0.8231189, 0.5805791, 0.9710217, 0.1122067, 0.3972355, 0.9548018, 0.1720886, 0.5681374, 0.9784001, 0.08427255, 0.30773, 0.849499.
If you have the ability, and before reading further, please have a go at telling us the next 10 numbers. If you lack the ability, the inclination, or the time, ponder it a bit anyway and see what you can gather.
Don’t cheat. Think about it first.
Don’t cheat.
Don’t cheat.
The point I want to make is simple, and one I’ve made dozens of times before, but I believe this is the simplest possible illustration. The point is that all models only say what they are told to say. This is true even if they are called AI.
Before I began constructing the series, I had no idea, except the same rough ideas you probably formed, about what any of the numbers would be beyond the first couple. That is, I knew they’d be positive and between 0 and 1, and I had a quick stab at the first few. But beyond that, I did not know.
These numbers were “random” to me, as they are to you, because “random” only means unknown. That is, the full cause of each value was not known to me, the model’s creator. Knowing the model does not mean knowing all its output.
However “random” the numbers appear, they are constructed with a “deterministic” algorithm, i.e. a model. Every time you run it, you’ll get precisely the same results. Because I ran it half a dozen times, in my chaotic efforts to remember how to automatically print commas and spaces between the numbers to make them easier to read (had I got it right, which I didn’t), I have now memorized the last couple in the series.
And because I know the model, I’m able to make another successful stab at what the numbers 101 and 102 will be before seeing them. After that, I have to concentrate too much, and I’m anyway lazy. But even if I were full of vigor and enthusiasm, I could not figure what numbers 1,001 and 1,002 are, let alone any past this point.
All this is so even though I absolutely positively with no error or fault know the exact precise definite simple, even trivial, model. Here it is, so you will know it, too:
r = 3.987654321
x_0 = r/10
x_t = r * x_{t-1} * (1 - x_{t-1})
I hope that’s readable. It simply states that the next number (x_t) is the constant r times the current number (x_{t-1}) times one minus the current number, and that we start (x_0) at a tenth of the constant.
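For readers who want to check the construction, here is a minimal sketch in Python. The post itself gives no code; this is just a direct transcription of the three lines above:

```python
# Logistic map from the post: x_t = r * x_{t-1} * (1 - x_{t-1}), with x_0 = r/10
r = 3.987654321
x = r / 10  # x_0 = 0.3987654321

series = [x]
for _ in range(99):  # generate numbers 2 through 100
    x = r * x * (1 - x)
    series.append(x)

# Print with 7 significant digits, commas and spaces between each number
print(", ".join(f"{v:.7g}" for v in series))
```

The printed values begin 0.3987654, 0.9560464, 0.1675681, matching the list above; iterating the loop twice more gives numbers 101 and 102.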
This (if you care to know) is called a “logistic map,” a name which is in no way important to us. It is part of a branch of math called “chaos theory”, which is also in no way crucial. “Chaos” only means, mathematically, sensitivity to initial conditions. The model here diverges wildly because it’s extremely sensitive to its starting point. Change even the last digit in r, which is here a 1, to a 2, and numbers 99 and 100 are 0.8685536 and 0.4552635. Which are, relatively speaking, quite different.
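That sensitivity is easy to demonstrate. A sketch, assuming only the model above: run the same recurrence with the last digit of r changed from 1 to 2 and compare the two sequences.

```python
# Sensitivity to the parameter: change the last digit of r and compare runs.
def logistic(r, n=100):
    x = r / 10  # x_0 = r/10, as in the post
    xs = [x]
    for _ in range(n - 1):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = logistic(3.987654321)  # the post's r
b = logistic(3.987654322)  # last digit changed from 1 to 2

# Early terms agree closely; by the end the sequences have fully diverged.
print(f"{a[1]:.7g} vs {b[1]:.7g}")    # nearly identical
print(f"{a[98]:.7g} vs {b[98]:.7g}")  # numbers 99: wildly different
```

A difference of one part in four billion in r grows, step by step, until the tails of the two sequences have nothing to do with one another.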
That sensitivity merely made it hard for me, or for anybody, to calculate in their head what the numbers in the sequence would be. Our limitation doesn’t change that the numbers are set in amber, fossilized for all time, once we write the model down.
This model told me what I told it to say. And this is so even if I did not know what those numbers would be, not exactly. The model did not go off “on its own”. The model did not innovate. The model obeyed. There was no point at which the model made any choice. The model is dumb, mere gears churning along.
All models are like this. Even AI. At no point in this sequence of numbers does the AI (for we can call any model AI if we like) come alive. No intelligence will ever “emerge” from this model. That I did not know the output in advance does not mean the model decided to insert its own numbers. It always said exactly what I told it to say.
This is so even if I multiplied this model by 100 billion. By that I mean I started with 100 billion different combinations of r and x_0, a number I chose because we hear some AI has that many parameters. All would be the same if I took the output of these 100 billion models and manipulated them in some way, say by taking an average, or whatever. Those are only more steps in the model. The entire thing still only says what I told it to say, and nothing more.
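To make that concrete, a hedged sketch: the four r values below are my own arbitrary stand-ins for the “100 billion” parameter settings, not anything from the post. Averaging the outputs of many such maps is just more deterministic steps, and re-running the whole ensemble reproduces it exactly.

```python
# Many models plus an averaging step is still one deterministic model.
def logistic_seq(r, n=100):
    x = r / 10
    xs = [x]
    for _ in range(n - 1):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def ensemble_average(rs, n=100):
    # Run every model, then average term by term: just more model steps.
    runs = [logistic_seq(r, n) for r in rs]
    return [sum(col) / len(col) for col in zip(*runs)]

rs = [3.7, 3.8, 3.9, 3.99]     # illustrative parameters only
first = ensemble_average(rs)
second = ensemble_average(rs)  # identical every time: the model only obeys
```

However many maps you average, and however you combine them, the whole contraption remains one fixed recipe.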
It does not matter if I ask a million people for an r, so that I have no idea what values of r the model will run on. Once the values are given, the model never does more than I told it. Any intelligence in the output would only come from the people being sneaky or intelligent and giving me an r that they knew would cause the model to go one way or the other. And if they didn’t know, and just made up their r, the conclusion remains the same. The model only said what I told it to say.
My knowledge, or its lack, of the output in no way means the model didn’t say what I told it to say. All models are like this. All AI is like this.
You are not like this. Even if you say your mind is an algorithm, it is one you largely create. Your mind does what you tell it to do: you tell you.
Here are the various ways to support this work:
- Subscribe at Substack (paid or free)
- Cash App: $WilliamMBriggs
- Zelle: use email: matt@wmbriggs.com
- Buy me a coffee
- Paypal
- Other credit card subscription or single donations
- Hire me
- Subscribe at YouTube
- PASS POSTS ON TO OTHERS
Did not read the article, just copy/pasted the number list to ChatGPT and it got right back with:
Yes — this sequence is very well fit by the logistic recurrence
x_(n+1) ≈ r x_n (1 - x_n)
with
r ≈ 3.9876541
In fact, using consecutive terms, the inferred value of r stays essentially constant at about 3.987654, so this is almost certainly the generating rule.
Starting from your last value
x_100 = 0.8494990
the next 10 values are approximately:
0.5098234
0.9965287
0.01379421
0.05424776
0.2045864
0.6489141
0.9084857
0.3315313
0.8837371
0.4097148
Because your listed values are rounded, these are also approximate to about the last displayed digit.
Perhaps AI wrote this post?
Post script. Looks like you are hiding well, for now, from the AI in the sky. I asked it if anyone on the web has posted this exact problem and it replied:
Short answer: I could not find your exact sequence posted anywhere on the web,
but the sequence corresponds to a logistic map chaotic sequence rather than a traditional number puzzle.
What your sequence actually is:
Your sequence fits very well to the logistic map recurrence:
x_(n+1) = r x_n (1 – x_n)
with r approximately 3.987654.
The logistic map is a classic nonlinear dynamical system used to demonstrate chaos and has historically
been used as a pseudo-random number generator. This means your list of numbers was almost certainly
generated algorithmically rather than coming from a simple arithmetic or geometric pattern.
Has this exact problem appeared online?
Searches for the exact numbers and parameter did not return an exact match. Therefore it is likely:
1. Someone generated a logistic map sequence and asked to predict next numbers.
2. It may come from a class or textbook problem on chaos.
3. It may be used as a pseudo-random number generator example.
4. The parameter 3.987654 appears intentionally chosen (descending digits), suggesting a constructed problem.
Interesting observation:
The parameter r = 3.987654 is suggestive because the digits descend (3.987654321…), which strongly
suggests the sequence was intentionally constructed as a math puzzle or demonstration.
If this were a typical puzzle question, the intended solution would be:
1. Notice numbers stay between 0 and 1.
2. Try the logistic map formula.
3. Solve for r using consecutive terms:
r = x_(n+1) / (x_n (1 – x_n))
4. Find r ≈ 3.987654.
5. Use the recurrence to generate future terms.
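The recipe in step 3 can be checked directly. A sketch using the first few published values (rounded to seven digits, so the estimates are only approximate):

```python
# Recover r from consecutive terms: r = x_{n+1} / (x_n * (1 - x_n))
xs = [0.3987654, 0.9560464, 0.1675681, 0.5562341, 0.9843035]

estimates = [xs[n + 1] / (xs[n] * (1 - xs[n])) for n in range(len(xs) - 1)]
print(estimates)  # each estimate is close to 3.987654
```

Because each pair of consecutive terms gives essentially the same r, the generating rule is pinned down from the published list alone, which is exactly how the AI identified it.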
When brilliant minds join forces with advanced computers that execute tasks with precision and speed by following instructions, they achieve what once seemed impossible.
~ Inspired by a Chinese proverb that says brain and brawn are essential in winning a war.
It’s interesting that the commenters using AI prove the point: a model yielding apparently random results gives reproducible and predictable results once the underlying model is expressly known.
Logic, empiricism, wisdom… These solvents are without impact for people who have, so often, survived them. I believe this is called the “Mithridatic effect”.