More Monkey Math

The Famous Brett Watson has written a detailed and intelligent response to my entry “Breeding Shakespeare, Not Typing”. In that entry I argued that while a thousand monkeys typing randomly might never reproduce so much as a single quote from the works of Shakespeare, a thousand monkeys with a minimal understanding of the theory of evolution could reproduce the entire works of Shakespeare in a relatively short time using very simple methods.

With Watson’s permission I post his email response below. He makes many good observations. I still have a few objections and intend to answer soon; in the meantime, enjoy Brett’s reasoning.

Brett’s text is totally unaltered from his email, but the formatting is mine:


You’re thinking along the same lines as Richard Dawkins here. Have you read any of his work? I quote him in “More Monkey Business” (which you may or may not have read already: see http://www.nutters.org/docs/more-monkeys) as saying that if we can’t believe that something “simple” (like a single living cell) arose by chance, then why should we think that something more complex (like God himself) arose by chance? But I’m not suggesting that anything arose by chance — that’s the point of the mathematics: it demonstrates the weakness of chance as a mechanism for anything informationally significant.

It means nothing to me that a single cell is simpler than a whole man in terms of production by chance-based mechanisms: neither of them will happen in actual practice, because neither is sufficiently simple. The mathematics tells us that we have to look to mechanisms *other than chance*. And “intelligent design” is the only meaningful alternative to chance that I’m aware of.

In his book, “Climbing Mount Improbable”, Dawkins runs a computer simulation along very similar lines to yours (although he does not disclose the exact algorithm) in which he generates the phrase, “Methinks it is like a weasel”, which is another Hamlet quotation. There are problems with this approach, as I see it.

Firstly, it’s cheating. The desired target of the algorithm is hard-coded into the algorithm. The chance that this algorithm will reach the chosen phrase approaches one with an increasing number of iterations: it converges on the goal. Conversely, the chance that it will reach any other phrase approaches zero with increasing iterations. This is a simulation of a universe which has physical laws predisposed to producing a specific outcome. If the universe actually operated like this, we’d be able to reproduce evolution experimentally in the lab. As it is, evolutionists have to either point to instances of bacteria becoming bacteria as evolution, or claim that the process is too slow to observe or reproduce, without specifying the mathematical basis for that claim.

The mechanism you’ve used only bears a passing resemblance to evolution (as hypothesised) in the wild. For starters, your selection mechanism is perfect: only the two “fittest” organisms in the gene pool get to breed; everyone else dies off. Such perfect selection is far from “natural” — it’s selective breeding.

Also, your mutation rate is very kind relative to the rate of reproduction and death, much like Dawkins’ “weasel” simulation. This part of the analysis is where I will have to get mathematical. I’ll here adopt a 41-character string with 32 possible characters per position, as in my original essay.

Let’s say that your “breeding” rule (#3) results in offspring which have a 50% chance of inheriting a letter from either parent, which seems to have been the intention, although your algorithm is a little more specific than that. Your “mutate” rule (#4) means that the parent strings are placed back in the pool after their 998 offspring are generated, so in the worst case scenario (where all the offspring are worse than the parents), we just re-use the parents. Note already that your process can only stand still or go backwards, so unless the chance of forward progress drops to zero (which it won’t), I can say without further consideration that this process converges on the chosen outcome with probability one.
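For concreteness, here is a minimal Python sketch of the process as read above. The original post’s exact algorithm isn’t reproduced here, so the target phrase, the 32-symbol alphabet, and the precise breeding details are assumptions for illustration: a pool of 1,000 strings, the two fittest kept as parents, 998 offspring that inherit each letter from either parent with 50% probability and then have one randomly chosen letter rewritten, and the parents returned to the pool.

```python
import random
import string

# Assumptions for illustration only (the original post's exact choices aren't shown here):
ALPHABET = string.ascii_uppercase + " ,:.;!"            # 32 symbols, as adopted above
TARGET = "TO BE, OR NOT TO BE: THAT IS THE QUESTION"    # an assumed 41-character target

def fitness(s):
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def breed(p1, p2):
    """Rule #3 as read above: each letter comes from either parent with 50% probability,
    then (rule #4) one randomly chosen position is rewritten with a random character."""
    child = [random.choice(pair) for pair in zip(p1, p2)]
    child[random.randrange(len(child))] = random.choice(ALPHABET)
    return "".join(child)

def next_generation(pool):
    """Keep the two fittest strings as parents, breed 998 offspring,
    and return the parents to the pool alongside their offspring."""
    parents = sorted(pool, key=fitness, reverse=True)[:2]
    return parents + [breed(*parents) for _ in range(998)]

pool = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET))) for _ in range(1000)]
generation = 0
while max(pool, key=fitness) != TARGET:
    pool = next_generation(pool)
    generation += 1
print(f"Reached the target after {generation} generations")
```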

Each offspring can be (at best) a one character improvement over its parent, so the ideal case results in two errors being fixed per generation: all “correct” letters inherited from parents, plus two other errors fixed in offspring. The inheritance mechanism has big returns in the early phase of evolution, and is less meaningful at the closing phase when everything is thoroughly in-bred. The first generation (purely random) has a statistical frequency of correct characters like so.

First generation: frequency of correct characters

  Correct characters    Frequency
  0                     27.2%
  1                     36.0%
  2                     23.2%
  3                      9.7%
  4                      3.0%
  5                      0.7%
  6                      0.1%
  7                      0.02%
  8                      0.01%
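These frequencies are just the binomial distribution for 41 independent positions, each of which is already correct with probability 1/32. A quick check (the figures round to the table above):

```python
from math import comb

N, P = 41, 1 / 32   # 41 positions, each independently correct with probability 1/32

for k in range(9):
    prob = comb(N, k) * P**k * (1 - P)**(N - k)
    print(f"{k} correct: {prob:.1%}")
# prints 27.2%, 36.0%, 23.2%, 9.7%, 3.0%, 0.7%, 0.1%, ... matching the table above
```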

Given that our first generation has 1000 elements, we can expect the typical best case to be six correct characters, with some possibility of seven, and five being very likely. So a typical case will have a six and a five as “founders” for the population. There will be some overlap between the five and the six, distributed like so.

Founder overlap (for the six+five case): frequency

  Overlap    Frequency
  0          43.3%
  1          41.9%
  2          13.1%
  3           1.6%
  4+          0.07%
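A rough way to reproduce these overlap figures is to treat each founder’s correct positions as a random subset of the 41 positions, which gives a hypergeometric distribution. A sketch of that approximation:

```python
from math import comb

POSITIONS, A, B = 41, 6, 5   # 41 positions; founders with 6 and 5 correct letters

for overlap in range(6):
    # probability that exactly `overlap` of the 5 correct positions of one founder
    # coincide with the 6 correct positions of the other (hypergeometric)
    prob = comb(A, overlap) * comb(POSITIONS - A, B - overlap) / comb(POSITIONS, B)
    print(f"overlap {overlap}: {prob:.1%}")
# overlap 0: 43.3%, overlap 1: 41.9%, overlap 2: 13.1%, ... close to the table above
```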

Thus, a very significant number of the second generation (about 433 on average) will have the best possible case of 11 correct letters. There is a very high probability that two of those high-ranking second generation strings will mutate into strings with 12 correct letters, and thus be candidates for parenthood of the next generation. Assuming that this is the case, those two parents will have at least 11 similar correct letters, and those letters will be fixed for the remainder of the simulation: there is no set of steps whereby a good letter can go bad again, mostly because the parents are re-included in the fitness evaluation.

After this initial great leap forwards, the parenting function ceases to provide much benefit, although it makes the theoretical maximum number of improvements per generation two instead of one. It is more or less equivalent to having twice as many single-parent generations after this first step, so far as I can see, and the single-parent case is much simpler to analyse (fewer cases), so I’ll analyse that instead.

Let’s take the single best second generation string, and make it the parent of the third generation, thus skipping the “breed” rule (#3), and generate 999 one-character mutations from it. Assuming our parent has 12 correct characters, the distribution of mutations in the offspring will be like so.

  • Backwards step to 11 correct (a correct character is mutated to incorrect): 28.4%
  • Sideways step to 12 correct (a correct character is mutated to itself, or an incorrect character is mutated to another incorrect character): 69.4%
  • Forwards step to 13 correct (an incorrect character is corrected): 2.2%
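These three figures follow from picking one of the 41 positions and one of the 32 characters uniformly at random (so a letter can be rewritten to itself). A quick check:

```python
LENGTH, ALPHABET, CORRECT = 41, 32, 12

backwards = (CORRECT / LENGTH) * (ALPHABET - 1) / ALPHABET    # a correct letter is spoiled
forwards  = ((LENGTH - CORRECT) / LENGTH) * (1 / ALPHABET)    # an incorrect letter is fixed
sideways  = 1 - backwards - forwards                          # everything else

print(f"backwards {backwards:.1%}, sideways {sideways:.1%}, forwards {forwards:.1%}")
# backwards 28.4%, sideways 69.4%, forwards 2.2%
```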

Thus, we can expect about 22 third-generation offspring to have 13 correct letters. (The chances of there being *no* improvement are 0.000%.) We pick one of the winners at random, and continue on to the next generation. The chances of forwards steps decrease with each additional correct step until we reach the goal (after which every other modification must be neutral or backwards).

Let’s examine the hardest case: where every letter bar one is correct. In this case, we have to choose the correct position to modify (1/41) and also modify it correctly (1/32) in order to have the desired outcome, giving a mere 0.076% chance of success. But we’re producing 999 offspring, so there’s a good chance we’ll succeed even here: 53.3%, in fact. So, in the best case (earlier cases) the chances of improvement are >99.999%, and in the worst case (the last) it’s 53.3%.
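The per-generation chance of progress with 999 one-character mutations can be sketched the same way; the figures below assume the 41-character string and 32-symbol alphabet used throughout:

```python
LENGTH, ALPHABET, OFFSPRING = 41, 32, 999

def p_forward(correct):
    """Chance that one random one-letter mutation fixes one of the remaining errors."""
    return (LENGTH - correct) / LENGTH / ALPHABET

for correct in (12, 40):
    p = p_forward(correct)
    expected = OFFSPRING * p                    # expected number of improved offspring
    progress = 1 - (1 - p) ** OFFSPRING         # chance that at least one offspring improves
    print(f"{correct} correct: about {expected:.0f} improved offspring, "
          f"chance of progress {progress:.1%}")
# 12 correct: about 22 improved offspring, chance of progress 100.0%
# 40 correct: about 1 improved offspring, chance of progress 53.3%
```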

In summary, your process has a worst-case chance to advance of greater than 50% per generation, and after an initial leap to around the 12-correct mark, we can expect one or two steps forward per generation with that kind of probability. This is about what we would expect from your empirical data, in which you determined that 25 generations would be sufficient. The best case would be about 15 generations, although that would be pretty unlikely.

Note that your solution scales because it can’t go backwards. It will get harder to finish with increasing length, since that last character gets harder and harder as the string gets longer, but the end is theoretically reachable.

Now let’s do it my way.

The “parents” thing doesn’t make a lot of difference, as we’ve seen, so I’m going to use the simpler case of a single parent producing offspring, like a bacterium (except that bacteria don’t strictly have “offspring”). My evolutionary algorithm goes like this.

  1. Initialise. Start with a random string of the desired length. To save time, make it partially correct, if you so desire.
  2. Reproduce and Mutate. Generate 100 offspring from this founder, each of which differs from its parent by randomly re-writing one letter. Only the offspring are kept: the parent is assumed to have a maximum reproduction ability of 100 offspring before it dies.
  3. Select. Of the 100 offspring, choose one which is the closest to the desired target. In the case of a dead heat, pick any one of the winners at random.
  4. Repeat. The winner becomes the founder of the next generation. Go back to step 2.
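For concreteness, here is a minimal runnable sketch of those four steps; the target phrase and the 32-symbol alphabet are assumptions for illustration, as before:

```python
import random
import string

ALPHABET = string.ascii_uppercase + " ,:.;!"            # 32 symbols, as assumed earlier
TARGET = "TO BE, OR NOT TO BE: THAT IS THE QUESTION"    # an assumed 41-character target
OFFSPRING = 100

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent):
    """Each offspring differs from its parent by one randomly rewritten letter."""
    s = list(parent)
    s[random.randrange(len(s))] = random.choice(ALPHABET)
    return "".join(s)

# 1. Initialise with a random string of the desired length.
founder = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0

while founder != TARGET:
    # 2. Reproduce and mutate: the parent dies; only the 100 offspring are kept.
    brood = [mutate(founder) for _ in range(OFFSPRING)]
    # 3. Select the offspring closest to the target, breaking dead heats at random.
    best = max(fitness(s) for s in brood)
    founder = random.choice([s for s in brood if fitness(s) == best])
    # 4. Repeat: the winner founds the next generation.
    generations += 1

print(f"Reached the target after {generations} generations")
```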

Aside from the parenting thing, this differs from your scenario in two ways: parents aren’t immortal, and we have only 100 offspring per generation. The mortality of the parent makes a huge impact on the overall function, because it is now possible that the parent’s offspring will all be worse than the parent, and thus create a backwards step. The chances of this happening depend mostly on the length of the string, relative to the number of offspring the parent can have. Shorter strings and larger numbers of offspring are both helpful.

Let’s examine a 100-character string and 100 offspring per generation. At 99/100 correct, what are the chances of finishing the game? To win, you need to choose the right position (1/100) and the right character (1/32), which is a 0.03% chance. We have 100 attempts at it, however, and our chances of getting it on at least one of those attempts are 3.1%. On the other hand, disaster may strike, and we may have a case where every single offspring is a step backwards. The chance per offspring of being a dud is (99/100)*(31/32), or 95.9%, but the chance of them *all* being duds is 1.5%. You can see here that the chance of moving forwards slightly outweighs the chance of moving backwards, so it’s a “downhill” process towards the goal, and we can reasonably expect it to happen.
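A small sketch of that calculation, assuming (as above) that a “backwards” generation is one in which every single offspring spoils a good letter:

```python
LENGTH, ALPHABET, OFFSPRING, CORRECT = 100, 32, 100, 99

p_fix   = (LENGTH - CORRECT) / LENGTH / ALPHABET          # one offspring fixes the last letter
p_spoil = (CORRECT / LENGTH) * (ALPHABET - 1) / ALPHABET  # one offspring spoils a good letter

forwards  = 1 - (1 - p_fix) ** OFFSPRING      # at least one offspring finishes the string
backwards = p_spoil ** OFFSPRING              # every single offspring is a step backwards

print(f"forwards {forwards:.1%}, backwards {backwards:.1%}")
# forwards 3.1%, backwards 1.5%
```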

But what if we up the stakes slightly to a 1000 character string and 100 offspring per generation? The forwards to backwards ratio in this case (at 999 correct characters) is 0.3% to 3.8% (with the remainder being “no change”). So it’s more than ten to one against making progress. But to get to that point, we have to make it to 998, which has a ratio of 0.6% to 3.4%. And to get there we need 997, which has 0.9% to 3.1%. The function is “downhill” up until we get 993 correct, then it’s “uphill” the rest of the way. From the most “downhill” point in the function, your chances of reaching the goal equate to your chances of several consecutive “uphill” steps. In this case, the chances are around 0.1%, meaning that the final trek from “993 correct” to “1000 correct” takes about 1000 generations on average.
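The same two formulas reproduce these ratios, and show where the process turns “uphill”; a sketch for the 1,000-character case with 100 offspring per generation:

```python
LENGTH, ALPHABET, OFFSPRING = 1000, 32, 100

def step_chances(correct):
    """Per-generation chance of a forward step vs. a backward step at `correct` letters."""
    p_fix   = (LENGTH - correct) / LENGTH / ALPHABET          # one offspring fixes a letter
    p_spoil = (correct / LENGTH) * (ALPHABET - 1) / ALPHABET  # one offspring spoils a letter
    forwards  = 1 - (1 - p_fix) ** OFFSPRING    # at least one offspring improves
    backwards = p_spoil ** OFFSPRING            # every offspring is a step backwards
    return forwards, backwards

for correct in (999, 998, 997, 993):
    f, b = step_chances(correct)
    print(f"{correct} correct: forwards {f:.1%}, backwards {b:.1%}")
# 999 correct: forwards 0.3%, backwards 3.8%
# 998 correct: forwards 0.6%, backwards 3.4%
# 997 correct: forwards 0.9%, backwards 3.1%
# 993 correct: forwards 2.2%, backwards 2.1%   (roughly the break-even point)
```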

What if we up the stakes further to 10,000 characters and 100 offspring per generation? I’m sure you can see where this is headed. In this case the break-even point is at about 9932 correct letters, and it’s uphill the rest of the way. That’s quite a long way from the finish line. According to my calculations, the chance of producing the correct number of consecutive improvements (per attempt) is as follows.

0.000000000000000000000000000000000000000721725%
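Locating the break-even point for longer strings is just a matter of scanning down from the goal with the same per-generation model; the sketch below lands within a letter or so of the figures quoted above:

```python
ALPHABET, OFFSPRING = 32, 100

def break_even(length):
    """Largest number of correct letters at which a forward step is still
    at least as likely as a backward step, per generation."""
    for correct in range(length - 1, 0, -1):
        p_fix   = (length - correct) / length / ALPHABET
        p_spoil = (correct / length) * (ALPHABET - 1) / ALPHABET
        forwards  = 1 - (1 - p_fix) ** OFFSPRING
        backwards = p_spoil ** OFFSPRING
        if forwards >= backwards:
            return correct
    return 0

for length in (1_000, 10_000):
    print(length, break_even(length))
# prints 993 and 9931 -- within a letter or two of the figures quoted above
```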

So this does *not* scale. We’ve got an absolutely perfect artificial selection algorithm (a natural one would be less efficient), and the one-character change is pretty much the ideal grain of selection (changing more than one would make matters worse, not better, and changing nothing at all doesn’t help). All this contrived stuff still only gets us about 99.3% of the way to the goal for free, with increasing cost per step beyond that point.

One comment

  1. A thought from a complete novice in this area: Is it ok to always select the very best match in a generation? Wouldn’t it more closely resemble evolution if we took 1 (or whatever number of parents is being used) randomly out of the top 10% to build a new generation?
