NotCH 2: A Digits Puzzle

OK, hands up who thought there was ever gonna be a second NotCH?

We’re not really a puzzle sort of guy, and base ten puzzles in particular tend to bore us. So, this is unlikely to be a regular thing. Still, the following question came up in some non-puzzle reading (upon which we plan to post very soon), and it struck us as interesting, for a couple reasons. And, a request to you smart loudmouths who comment frequently:

Please don’t give the game away until non-regular commenters have had time to think and/or comment. 

Start by writing out a few terms of the standard doubling sequence:

1, 2, 4, 8, 16, 32, 64, etc. 

Now, look at the sequence of final digits: 1, 2, 4, 8, 6, 2, 4, etc. If you keep going, is there a pattern? Can you prove it?

Next, look at the sequence of first digits: 1, 2, 4, 8, 1, 3, 6, etc. If you keep going, is there a pattern? Can you prove it?

Have fun.

Update (06/09/21)

Thanks to all the commenters. Leaving aside commenter Glen’s pondering on Benford’s Law, we’ll summarise the solution to the intended puzzle and, more interesting to us, the origins and purpose of the puzzle.

The “puzzle” appears in Teaching Mathematics at Secondary Level, a book by Tony Gardiner, who also coauthored the great book of mathematical problems that we discussed here. Written in 2016, TMSL is Gardiner’s guide/manifesto to teaching the English Mathematics Curriculum. Being such a commentary, Gardiner’s book does not always stand alone so well, but it has some great stuff, and we’ll post some excerpts in the near future.

Gardiner introduces the doubling sequence when discussing a Curriculum requirement for early secondary school, for students to “reason deductively with number” (p 44). He notes that this requirement is already problematic when considering simple numerical patterns:

“At present the patterns pupils meet are often chosen in a way that misleads everyone into thinking that

patterns that seem genuine, always are genuine.

This makes it hard for pupils to discover the need for proof.”

Then to illustrate, Gardiner lists the first seventeen terms in the doubling sequence (beginning with 2, and without the stupid colours):

2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072

He then extracts the sequences of last (units) digits and first digits:

2 4 8 6    2 4 8 6   2 4 8 6   2 4 8 6

2 4 8 1 3 6 1 2 5 1   2 4 8 1 3 6 1 

So, two apparent patterns. Of course the existence of the first pattern is easy to prove: the units digit of 2N depends only upon the units digit of N. That implies the units digits in the doubling sequence must continue to cycle forever.
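The cycling of units digits is easy to confirm by machine. A minimal Python check (ours, not from Gardiner):

```python
# The units digit of 2N depends only on the units digit of N, so the units
# digits of 2, 4, 8, 16, 32, ... must cycle. Verify for the first 100 terms:
units = [(2**n) % 10 for n in range(1, 101)]
assert units == [2, 4, 8, 6] * 25
print(units[:8])  # [2, 4, 8, 6, 2, 4, 8, 6]
```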

But the sequence of first digits is a very different story, since no such simple proof of the apparent pattern exists. And so the first point, and Gardiner’s point, is

“Somehow pupils need to learn that what looks like a pattern may not be a pattern at all!”

That is a hugely important message. And, that is where Gardiner leaves the “puzzle”, and goes on with his guide/manifesto.

To finish off, we’ll indicate why there appears to be a pattern in the second sequence, and when the pattern disappears. As mystery commenter Gavin noted, the key is 2^10 = 1024 ≈ 1000. So, unless we’re unlucky, the first digit of 2^10 × N (which is the number ten spots along from N in the doubling sequence) will be the same as the first digit of N.

To see what needs to happen to be unlucky, suppose we have a number N with beginning digits ABCD. Then, multiplying each of the digits separately by 1024, the digits of 1024N are determined by summing

A    0    2A   4A
     B    0    2B   4B
          C    0    2C   4C
               D    0    2D   4D

So, the first digit of 1024N will be A unless the 2A + C in the third column carries over and the B in the second column is 8 or 9 accordingly. Or, the 4A + 2B + D in the fourth column is enough for a carry over and … . And so on. Of course, unless God is playing with our heads, this is bound to happen eventually. Which it does:

2^36 = 68,719,476,736

The first-digit pattern finally breaks at 2^46, with a 7 instead of a 6.
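The break is easy to locate by brute force. A quick Python scan (ours, not from Gardiner’s book) for the first spot where the apparent period-ten pattern of first digits fails:

```python
# Find the smallest n for which 2^n and 2^(n+10) have different first digits,
# i.e. the first place the apparent period-10 pattern of leading digits breaks.
def first_digit(x: int) -> int:
    return int(str(x)[0])

n = 1
while first_digit(2**n) == first_digit(2**(n + 10)):
    n += 1
print(n, first_digit(2**n), first_digit(2**(n + 10)))  # 36 6 7
```

That is, 2^36 begins with a 6, but ten spots along, 2^46 begins with a 7.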

 

UPDATE (08/09/21)

This is a rewriting and filling in of the mystery Student’s proof in the comments that there can be no first-digit pattern in the doubling sequence. It is long, but hopefully readable step by step.

To set things up, we’re asking whether it is possible for the first digits of the doubling sequence to eventually repeat every \boldsymbol{R} spots:

(*) If \color{blue}\boldsymbol{N} is huge then \color{blue}\boldsymbol{2^N} and \color{blue}\boldsymbol{2^{N+R}} always have the same first digit.

(The “huge” here means that we’re allowing it to take a while for the pattern to kick in, and we’re making no guess as to when.)

We’ll prove by contradiction that (*) is impossible. So, to set up the proof, suppose there is such an \boldsymbol{R}, and let’s write \boldsymbol{2^R} in scientific form:

    \[\color{blue}\boldsymbol{2^R = S10^r} \ \mbox{\bf where} \ \ \boldsymbol{1 < S < 10\,.}\]

For example, if \boldsymbol{R=10}, corresponding to our apparent pattern, then \boldsymbol{2^{10} = 1024 = 1.024 \times 10^3}, with  \boldsymbol{S = 1.024}. (Notice that in general \boldsymbol{S=1} is impossible, since this would imply \boldsymbol{2^R} has a final digit of 0.)
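Computing the scientific-form mantissa is a one-liner. A Python sketch (the name `mantissa` is ours, not from the post):

```python
from math import floor, log10

def mantissa(R: int) -> float:
    """Return S in the scientific form 2^R = S * 10^r, with 1 <= S < 10."""
    r = floor(R * log10(2))
    return 2**R / 10**r

print(mantissa(10))  # 1.024
```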

Now, how do we go about proving that \boldsymbol{R}, whatever it is, cannot work in (*)? The quantity that really matters is \boldsymbol{S}, and just to make it a little more concrete let’s assume that \boldsymbol{1 < S < 2}, just as for our apparent pattern. (Other ranges of \boldsymbol{S} can be handled in a similar manner; 09/09/21 – see the proof addendum.)

With this assumption on \boldsymbol{S} but assuming nothing else, we’ll now show there must be a huge \boldsymbol{N} such that \boldsymbol{2^N} has first digit 1, but \boldsymbol{2^{N+R}} has first digit 2. That is, whatever \boldsymbol{R} is, we can find an \boldsymbol{N} as large as we want that kills the \boldsymbol{R}-repeating pattern, thus proving (*) false.

Now, all of the above is really just setup, and it’s time to get down to work. We’re hunting for an \boldsymbol{N} that works as indicated, so let’s write the corresponding hoped-for \boldsymbol{2^N} in scientific form:

    \[\color{blue}\boldsymbol{2^N = T10^n\,.}\]

What would work perfectly is if we could wind up with \boldsymbol{T=2/S}. Then \boldsymbol{2^N} would definitely have first digit 1, and \boldsymbol{T} would be just big enough for \boldsymbol{2^{N+R}} to have first digit 2. Of course obtaining such an equality is too much to hope for. But, what we can do is choose huge \boldsymbol{N} so that \boldsymbol{T} is at most a tiny fraction larger than \boldsymbol{2/S}. To be precise, we’ll show that, whatever \boldsymbol{R} is, we can arrange for a huge \boldsymbol{N} so that

(1)   \[\color{blue}\boldsymbol{\frac2{S} \leqslant T \leqslant \frac2{S}(1+\mbox{\bf tiny}). }\]

So, if “tiny” is tiny enough (relative to \boldsymbol{S}), then we’ll still have \boldsymbol{T<2} (implying \boldsymbol{2^N} has first digit 1), and also \boldsymbol{2\leqslant ST<3} (implying \boldsymbol{2^{N+R}} has first digit 2).
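For the apparent pattern, with R = 10 and S = 1.024, such an N can be found by brute force. A Python illustration (ours; a scan, not the proof itself):

```python
# For R = 10 (S = 1.024): find an N where 2^N has first digit 1 but
# 2^(N+10) has first digit 2 -- a mantissa T squeezed into [2/S, 2).
def first_digit(x: int) -> int:
    return int(str(x)[0])

N = 1
while not (first_digit(2**N) == 1 and first_digit(2**(N + 10)) == 2):
    N += 1
print(N)  # 94: 2^94 starts with 198..., while 2^104 starts with 202...
```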

To show that there is such an \boldsymbol{N}, we first show there is a huge \boldsymbol{M} such that \boldsymbol{2^M} is really close to a power of 10. That is, writing \boldsymbol{2^M} in scientific form,

    \[\color{blue}\boldsymbol{2^M = U10^m\,,}\]

we’ll show that we can arrange

(2)   \[\color{blue}\boldsymbol{1 < U \leqslant (1+\mbox{\bf really tiny}). }\]

How does (2) prove (1)? Well, just keep multiplying \boldsymbol{2^M} by itself. Eventually some power of \boldsymbol{U}, let’s say \boldsymbol{U^K}, will just creep higher than \boldsymbol{2/S}; and, since each multiplication by \boldsymbol{U} increases the product by a factor of at most (1 + really tiny), this first power cannot overshoot \boldsymbol{2/S} by more than that factor, which is exactly what (1) demands. Then, we can choose \boldsymbol{N = KM}.

And so, the final thing to do is show we can find huge \boldsymbol{M} for which (2) holds. But that turns out to be pretty easy. Start with \boldsymbol{2^H}, some huge power of 2, wherever the \boldsymbol{R} pattern supposedly begins. Now keep taking powers of \boldsymbol{2^H} and write the results in scientific form:

    \[\color{blue}\boldsymbol{2^{kH} = V_k10^{h_k}\,.}\]

Notice that no two of the \boldsymbol{V_k} can be equal, since, taking the ratio, that would imply a power of 2 ends in a 0. But then we have all these zillions of different \boldsymbol{V_k}, as many as you want, crammed into the interval (1,10). So, there must be a ratio of two of these \boldsymbol{V_k} trapped between 1 and (1 + really tiny). (Technically, this is the pigeonhole principle applied to finitely many intervals of the form \boldsymbol{[(1 + rt)^j, (1+rt)^{j+1}]} covering (1,10).) Then, the corresponding power of 2 gives us our \boldsymbol{2^M}.
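The existence of such an M is also easy to see computationally. A brute-force Python search (ours, using exact integer arithmetic) for a power of 2 with leading digits 100…, i.e. with U within roughly one percent of 1:

```python
# Search for the smallest M with 2^M = U * 10^m and U just above 1,
# detected by 2^M having leading digits "100". Exact integers, no rounding.
M = 1
while not str(2**M).startswith("100"):
    M += 1
print(M)  # 196: 2^196 = 1.0043... * 10^59
```

(Of course a scan is not a proof; it is the pigeonhole argument above that guarantees the search terminates for any tolerance.)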

UPDATE (09/09/21)

Just a quick addendum to the digits proof. We assumed in the proof that \boldsymbol{S < 2}, and suggested other ranges of \boldsymbol{S} could be handled similarly. In fact, it’s very easy to see that if \boldsymbol{R} is such that (*) holds then we must have \boldsymbol{S < 2}. If \boldsymbol{S \geqslant 2} then it is easy to choose \boldsymbol{M} at the end of the proof to ensure \boldsymbol{SU <10}; the only issue is if \boldsymbol{S} is close to 10 but, whatever \boldsymbol{S} is, we can still ensure \boldsymbol{U} is really, really close to 1. So, \boldsymbol{2^M} will have first digit 1, and \boldsymbol{2^{M+R}} will definitely have a different first digit.

18 Replies to “NotCH 2: A Digits Puzzle”

  1. Question: (I promise not to post any answers until others have had a good go) what is meant by “prove” in this context? I’m guessing you’re not going to allow me to invent a new type of function for example…?

    1. Thanks, RF. By “prove” I mean to give a convincing argument. So, no need for a formal pure maths style of proof. But, enough of an argument that it’s clear those formal details could somehow be filled in if demanded.

  2. OK, we’re not sure if we scared everybody into silence, or if people are not interested in the puzzle, or if people are simply puzzled by the puzzle.

    So, let me know. Do people want the smart loudmouths to (a) go for it, (b) give hints (presumably starting small), or (c) to remain silent? If no one puts up their hand for (b) or (c), we’ll let the smart loudmouths off the leash.

  3. The pattern in the last digit is quite simple to understand using modular arithmetic.

    I guess that there is no pattern in the first digit but there appears to be because 1024 is approximately equal to 1000. The apparent pattern would break down for large enough powers of two.

    1. Thanks, Gavin. That’s it in a nutshell. We’ll let others add the details if they wish, and we’ll update the post with details soon.

      1. Well, there’s more to your second question… I’m not sure where you were headed with it, so I’ll keep quiet, except perhaps to ask two innocent little questions for interested readers:

        When looking at the sequence of first digits, do you notice any of them occurring more often than others? How often do you conjecture this to happen, and can you prove it?

  4. Hi,

    Gavin is right in that there is no obvious pattern for large enough n for leading digit as 2^10 is slightly larger than 10^3

    For the first 10000 doubles the leading digit pattern will cycle mod 10 as 1.024 ^ 1000 is still less than 2 mod 10

    Steve R

  5. Glen, are you referring to that chestnut of an anecdote about the professor who noticed the early pages of his book of log tables were more well-read than the later ones?

    Also, if you know what I’m talking about – can you remember the name?

  6. > Now, look at the sequence of final digits: 1, 2, 4, 8, 6, 2, 4, etc. If you keep going, is there a pattern? Can you prove it?

    Well for any positive integer k, you’ll have two powers of two which share the same last k digits, because there’s only finitely many strings of k digits. Then once you have two powers of two with the same last few digits, it will repeat, since you’re applying the same operation.

    > Next, look at the sequence of first digits: 1, 2, 4, 8, 1, 3, 6, etc. If you keep going, is there a pattern? Can you prove it?

    My idea is that when we take the base-10 logarithm of, say, N, the decimal places are a sort of ‘signature’ which determines the leading digits of N. Then if we divide [0,1) into ten intervals:

        \[ [0, \log(2)), [\log(2), \log(3)), \dots, [\log(9), 1)\]

    where logarithms are base-10, then the interval which the fractional part of log(N) lies in determines the leading digit of N. So we’re interested in the sequence \{\log(2^i)\}, or \{i\cdot \log(2)\} for i=1, 2, \dots.

    From now on, I’ll be implicitly referring to the fractional parts of numbers rather than their actual quantity, because we only really care about what our numbers are in [0,1).

    If I want to show that the pattern doesn’t repeat, then all I need to do is to show that when we keep adding multiples of log(2), the intervals that we ‘land in’ don’t repeat (we only care about the decimal places). Suppose it did, and that the starting digit of 2^n was the same as 2^{n+C} for all sufficiently large n. Then this means that \{(n+C)\log(2)\} and \{n\log(2)\} are in the same interval, so C\log(2) can’t be greater than the ‘distance’ from n\log(2) to the edge of one of the intervals. But if we choose n so that n\log(2) is smaller than but so close to \log(3) that n\log(2) + C\log(2) would belong to a different interval, this would contradict that the first digits repeated. So I only need to find such an n.

    It seems like because \log(2) is irrational, that I should be able to multiply it by something and end up with something very close to any string of digits. You can prove this using Pigeonhole again! If there was an interval of length 1/K which wasn’t in the sequence \{i\cdot \log(2)\}, i=1,2,\dots then in \{0,\log(2), 2\log(2), \dots, K\log(2)\} there will be two of them in [\frac{a}{K}, \frac{a+1}{K}) for some integer a and we can subtract them to ‘cover’ the 1/K interval.

    So I can choose n so that n\log(2) is just below \log(2) (fractional parts), by choosing a relevant interval of length 1/K. This means that n\log(2) will be in a lower interval than (n+C)\log(2).

    Oh, I guess if C\log(2) is really close to 1 it could loop around to land in the same interval again. In that case… we can choose n so it’s greater than but really close to \log(2) so that n\log(2) lies in [\log(2),\log(3)) but (n+C)\log(2) will fall just short and land in [0, \log(2)).

    Sorry if I’m not really able to explain my ideas rigorously or succinctly. Most of this is just me trying to formalise this idea of eventually landing in a different interval, but I think this means that the first digits won’t repeat.
