Fibonacci Numbers to the Rescue

No, really. This time it’s true. Just wait.

Our favourite mathematics populariser at the moment is Evelyn Lamb. Lamb’s YouTube videos are great because they don’t exist. Evelyn Lamb is a writer. (That is not Lamb in the photo above. We’ll get there.)

It is notoriously difficult to write good popular mathematics (whatever one might mean by “popular”). It is very easy to drown a mathematics story in equations and technical details. But, in trying to avoid that error, the temptation then is to cheat and to settle for half-truths, or to give up entirely and write maths-free fluff. And then there’s the writing, which must be engaging and crystal clear. There are very few people who know enough of mathematics and non-mathematicians and words, and who are willing to sweat sufficiently over the details, to make it all work.

Of course the all-time master of popular mathematics was Martin Gardner, who wrote the Mathematical Games column in Scientific American for approximately three hundred years. Gardner is responsible for inspiring more teenagers to become mathematicians than anyone else, by an order of magnitude. If you don’t know of Martin Gardner then stop reading and go buy this book. Now!

Evelyn Lamb is not Martin Gardner. No one is. But she is very good. Lamb writes the mathematics blog Roots of Unity for Scientific American, and her posts are often surprising, always interesting, and very well written.

That is all by way of introduction to a lovely post that Lamb wrote last week in honour of Julia Robinson, who would have turned 100 on December 8. That is Robinson in the photo above. Robinson’s is one of the great, and very sad, stories of 20th century mathematics.

Robinson worked on Diophantine equations: polynomial equations with integer coefficients, for which we hunt integer solutions. So, for example, the equation x² + y² = z² is Diophantine, with the integer solution (3, 4, 5), as well as many others. By contrast, the Diophantine equation x² + y² = 3 clearly has no integer solutions.

Robinson did groundbreaking work on Hilbert’s 10th problem, which asks if there exists an algorithm to determine whether a Diophantine equation has (integer) solutions. Robinson was unable to solve Hilbert’s problem. In 1970, however, building on the work of Robinson and her collaborators, the Russian mathematician Yuri Matiyasevich was able to solve the problem in the negative: no such algorithm exists. And the magic key that allowed Matiyasevich to complete Robinson’s work was … wait for it … Fibonacci numbers.

Label the Fibonacci numbers as follows:

F₁ = 1, F₂ = 1, F₃ = 2, F₄ = 3, F₅ = 5, F₆ = 8, …

It turns out that with this labelling the Fibonacci numbers have the following weird property:

If Fₙ² divides Fₘ then Fₙ divides m.

You can check what this is saying with n = 3 and m = 6. (We haven’t been able to find a proof online to which to link.) How does that help solve Hilbert’s problem? Read Lamb’s post, and her more bio-ish article in Science News, and find out.
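For anyone who wants more than a single check, the property is easy to test by brute force. The following sketch (our illustration; nothing like it appears in Lamb’s post) verifies the claim for all indices up to 30:

```python
def fibs(k):
    """Return [F_1, ..., F_k] with the labelling F_1 = F_2 = 1."""
    fs = [1, 1]
    while len(fs) < k:
        fs.append(fs[-1] + fs[-2])
    return fs[:k]

F = fibs(30)

# The claimed property: whenever (F_n)^2 divides F_m, F_n divides m.
for n in range(1, 31):
    for m in range(1, 31):
        if F[m - 1] % F[n - 1] ** 2 == 0:
            assert m % F[n - 1] == 0, (n, m)

print("No counterexamples with n, m up to 30")
```

The n = 3 and m = 6 check above is the instance F₃² = 4 dividing F₆ = 8, with F₃ = 2 indeed dividing 6.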

The Super-Rigging of Gambling

Last week, the ABC set to bashing bet365, bringing to light some of the huge betting company’s unsavoury practices. To which we respond, “Well done”. And, “Well, duh”.

The ABC noted a number of dodgy tactics employed by bet365, writing it all up as astonishing revelation. Perhaps the ABC reporters and their cloistered readers were astonished, but many Australian gamblers would have simply yawned. All gambling companies employ similar tactics and they’ve always done it. It is not new and it is not news. It is all part of the standard super-rigging of gambling.

To begin, it is no secret that gambling is rigged; even bad gamblers know that the odds are stacked against them. Mathematically, the rigging of a game is expressed in terms of expectation. In a fair game the average or “expected” win is zero. For example, flipping a coin in the natural win-lose manner is fair. By comparison, roulette has 37 possible outcomes but the payouts are calculated as if there were only 36 numbers. (The payout is “even money” if you bet on “red” or “black”, and the payout is “35 to 1” if you bet on a number.) This implies that the average loss per spin on roulette is 1/37 of the amount bet, or an expectation of about -3%. The expectation being negative indicates the rigging.
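For the doubtful, the roulette arithmetic can be checked in a few lines. This little computation (ours, purely illustrative) confirms that the “red” bet and the single-number bet carry exactly the same expectation of −1/37 per unit bet:

```python
from fractions import Fraction

# European roulette: 37 pockets in all, of which 18 are red.
POCKETS = 37

def expectation(win_prob, payout_to_one):
    """Expected profit per $1 bet: a win pays `payout_to_one`, a loss costs the $1."""
    return win_prob * payout_to_one - (1 - win_prob)

red = expectation(Fraction(18, POCKETS), 1)     # "even money" on red
single = expectation(Fraction(1, POCKETS), 35)  # "35 to 1" on a single number

print(red, single)  # -1/37 -1/37
print(float(red))   # about -0.027, the "about -3%" quoted above
```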

Given that gambling institutions intend to offer only rigged, negative-expectation games, what can punters do about it? Lots, and not much. They can cheat, of course. Or, they can become experts on horses or golfers or whatever. Or, they can look for mechanical or human flaws. There’s a surprising number of avenues to explore as well as, of course, many dead ends. (To illustrate the subtlety, we’ve included a few gambling puzzles at the end of the post.) Finding and exploiting opportunities, however, takes work and/or sophistication and/or capital. There’s lunch there, but it’s not free.

So, as a general rule, punters are left with only losing games to play. But how, then, does a gambling site entice a punter to play a game of negative expectation?

Yes, it’s a stupid question. Obviously there’s no shortage of punters willing to bet on appallingly bad games. But, if you run a gambling site, the real question is how to get the punter to gamble on your site. And that’s where one form of super-rigging begins. Super-rigging is making a betting opportunity appear better than it is. This is built in to the way poker machines work, and betting sites do it as a matter of routine.

Betting sites have various ways of enticing punters. To begin, there are sign-up bonuses. So, for example, you might sign up with a $200 deposit and the site will throw in $100 of “free bets”. That’s akin to signing up for ten sessions at a gym and getting a few “free” lessons chucked in. It’s basically fine, with what you see being pretty much what you get. After that, however, there are innumerable betting “promotions”, many blasting out from the TV and destroying everyone’s enjoyment of the footy. (Unless you’re a Saints fan, in which case any distraction from the actual game is considered a plus.)

The effect of gambling promotions is to change the expectation of the bets. For example, a very common offer is “money back” if the punter bets on a horse and that horse comes 2nd or 3rd. (That “money back” is most commonly in the form of a “free bet” equal to the size of the original wager, which is an important distinction but one we can ignore here.) Then, given a good horse may have, say, a 30% chance of coming 2nd or 3rd, an expectation of about -10% may become an expectation of about +20%. There’s no guarantee of winning on that race, of course, but it’s now a sensible bet. These promotions are obviously attractive to punters.
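To see how such numbers can arise, here is a back-of-envelope calculation. The figures are our assumptions, chosen to match the post: a horse at decimal odds of 3.0, with a 30% chance of winning and a further 30% chance of running 2nd or 3rd, and with the refunded stake counted at face value, as above:

```python
# Assumed, illustrative numbers (not from any actual betting market).
p_win, p_place, odds = 0.30, 0.30, 3.0
profit_if_win = odds - 1  # $2 profit per $1 staked at decimal odds 3.0

# Without the promotion: the stake is lost 70% of the time.
ev_plain = p_win * profit_if_win - (1 - p_win)

# With the promotion: a 2nd or 3rd refunds the stake, so the stake is
# only lost the remaining 40% of the time.
ev_promo = p_win * profit_if_win - (1 - p_win - p_place)

print(round(ev_plain, 2), round(ev_promo, 2))  # -0.1 0.2
```

That is, per dollar bet, the promotion shifts the expectation from about −10% to about +20%.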

How do the betting sites avoid losing a ton of money on these promotions? Often they don’t have to do much of anything. To begin, most promotions will come with a relatively small maximum bet size, of $50 or so; this is fair enough, just the same as Coles limiting some sale item to “five per customer”. Beyond that, the promotion can be pretty much what it appears to be, in itself a loser for the company but good advertising to get the punters onto the site to bet further. But there are also traps and nasty tricks.

First of all, betting promotions vary dramatically in value, with more than a few being close to worthless. They can be analogous to Motor Heaven blaring that a car is “50% off”, after having doubled the price the previous week. Secondly, even valuable promotions can be used poorly. The horse promotion above, for example, would be essentially worthless if used to bet on a massive favourite or a sluggish also-ran. Again, one might compare this to a commercial situation, say Harvey Norman giving $10 off on any one item in the store and someone using that offer when buying an overpriced TV.

Amidst all the noise, however, there are many good promotions that can create positive expectation on small bets when used intelligently. So, what happens then? Then what happens is what the ABC story is all about.

The gambling sites simply nobble any punter who is not a loser, in any manner they can: they will refuse to offer the promotions; they will limit the size of bets to approximately zero; they will lower the odds. What does that leave? It leaves the betting sites screaming out their offers, everywhere, while any gambler who is halfway successful is banned from those offers, if not from the site entirely.

And that is the super-rigging. The betting sites pretend they are offering positive expectation, but they will only continue that offer for people who use the offer in a useless manner. And, unlike the other aspects we have mentioned, such nasty practice has no commercial analogy that anyone would regard as acceptable. Imagine going into Harvey Norman and being shoved out the door, with some thug yelling “You only buy items on special, so we don’t want you here”. It is unthinkable at Harvey Norman but, in the context of gambling, it is universal.

How can the betting sites get away with this nastiness? Because the ACCC, the federal body responsible for overseeing and enforcing consumer law, is all bark and no bite. And, because the state governments and government regulators only care about whether they’re getting their cut of the loot.

It is obscene. And, as we indicated, none of it is news.

 

PUZZLES

Here are three gambling puzzles. If you are familiar with the puzzles and are sure you already know the answers, then please refrain from commenting for a while, leaving others free to think about them.

Puzzle 1. You are gambling on roulette, which has 18 red numbers, 18 black numbers and 1 green number (the zero). You watch the wheel spin and the ball lands on a red number. What colour should you bet on next, red or black? Or, doesn’t it matter?

Puzzle 2. A casino gives you a free bet of $10. You can place the bet on any standard casino game, or on a horse, or whatever. If the bet wins, you get your winnings as usual. (For example, if you bet “red” on roulette and win, you’d win $10.) Win or lose, the casino keeps the coupon. How much is the free bet worth?

Puzzle 3. You have found a betting game with positive expectation; it’s win-lose (like betting on “red” or “black” in roulette), but you have a 55% chance of winning and only a 45% chance of losing. You start with $1000 and hope to double your money. What is the probability that you will succeed before losing your $1000?

The NAPLAN Numeracy Test Test

The NAPLAN Numeracy Test Test is intended for education academics and education reporters. The test consists of three questions:

Q1. Are you aware that “numeracy”, to the extent that it is anything, is different from arithmetic and much less than solid school mathematics?

Q2. Do you regard it important to note and to clarify these distinctions?

Q3. Are you aware of the poverty in NAPLAN testing numeracy rather than mathematics?

The test is simple, and the test is routinely failed. NAPLAN is routinely represented as testing the “basics”, which is simply false. As a consequence, the interminable conflict between “inquiry” and “basics” has been distorted beyond sense. (A related and similarly distorting falsity is the representation of current school mathematics texts as “traditional”.) This framing of NAPLAN leaves no room for the plague-on-both-houses disdain which, we’d argue, is the only reasonable position.

Most recently this test was failed, and dismally so, by the writers of the Interim Report on NAPLAN, which was prepared for the NSW state government and was released last week. The Interim Report is short, its purpose being to prepare the foundations for the final report to come, to “set out the major concerns about NAPLAN that we have heard or already knew about from our own work and [to] offer some preliminary thinking”. The writers may have set out to do this, but either they haven’t been hearing or they haven’t been listening.

The Interim Report considers a number of familiar and contentious aspects of NAPLAN: delays in reporting, teaching to the test, misuse of test results, and so on. Mostly reasonable concerns, but what about the tests themselves, what about concerns over what the tests are testing? Surely the tests’ content is central? On this, however, at least before limited correction, the Report implies that there are no concerns whatsoever.

The main section of the Report is titled Current concerns about NAPLAN, which begins with a subsection titled Deficiencies in tests. This subsection contains just two paragraphs. The first paragraph raises the issue that a test such as NAPLAN “will” contain questions that are so easy or so difficult that little information is gained by including them. However, “Prior experimental work by ACARA [the implementers of NAPLAN] showed that this should [not] be so.” In other words, the writers are saying “If you think ACARA got it wrong then you’re wrong, because ACARA told us they got it right”. That’s just the way one wishes a review to begin, with a bunch of yes men parroting the organisation whose work they are supposed to be reviewing. But, let’s not dwell on it; the second paragraph is worse.

The second “deficiencies” paragraph is concerned with the writing tests. Except it isn’t; it is merely concerned with the effect that moving NAPLAN online has on the analysis of students’ tests. There’s not a word on the content of the tests. True, in a later “Initial thinking” section the writers have an extended discussion of issues with the writing tests. But why are these issues not front and centre? Still, it is not our area and so we’ll leave it, comfortable in our belief that ACARA is mucking up literacy testing and will continue to do so.

And that’s it for “deficiencies in tests”, without a single word about suggested or actual deficiencies of the numeracy tests. Anywhere. Moreover, the term “arithmetic” never appears in the Report, and the word “mathematics” appears just once, as a semi-synonym for numeracy: the writers echo a suggested deficiency of NAPLAN, that one effect of the tests may be to “reduce the curriculum, particularly in primary schools, to a focus on literacy/English and numeracy/mathematics …”. One can only wish it were true.

How did this happen? The writers boast of having held about thirty meetings in a four-day period and having met with about sixty individuals. Could it possibly be the case that not one of those sixty individuals raised the issue that numeracy might be an educational fraud? Not a single person?

The short answer is “yes”. It is possible that the Report writers were warned that “numeracy” is snake oil and that testing it is a foolish distraction, with the writers then, consciously or unconsciously, simply filtering out that opinion. But it is also entirely possible that the writers heard no dissenting voice. Who did the writers choose to meet? How were those people chosen? Was the selection dominated by the predictable maths ed clowns and government hacks? Was there consultation with a single competent and attuned mathematician? It is not difficult to guess the answers.

The writers have failed the test, and the result of that failure is clear. The Interim Report is nonsense, setting the stage for a woefully misguided review that in all probability will leave the ridiculous NAPLAN numeracy tests still firmly in place and still just as ridiculous.

A PISA Crap

The PISA results were released on Tuesday, and Australians have been losing their minds over them. Which is admirably consistent: the country has worked so hard at losing minds over the last 20+ years, it seems entirely reasonable to keep on going.

We’ve never paid much attention to PISA. We’ve always had the sense that the tests were tainted in a NAPLANesque manner, and in any case we can’t imagine the results would ever indicate anything about Australian maths education that isn’t already blindingly obvious. As Bob Dylan (almost) sang, you don’t need a weatherman to know which way the wind is blowing.

And so it is with PISA 2018. Australia’s mathematical decline is undeniable, astonishing and entirely predictable. Indeed, for the NAPLANesque reasons suggested above, the decline in mathematics standards is probably significantly greater than is suggested by PISA. Greg Ashman raises the issue in this post.

So, how did this happen, and what are we to do? Unsurprisingly, there has been no reluctance from our glorious educational leaders to proffer warnings and solutions. AMSI, of course, is worrying their bone, whining for about the thirtieth time about unqualified teachers. The Lord of ACER thinks that Australia is focusing too much on “the basics”, at the expense of “deep understandings”. If only the dear Lord’s understanding was a little deeper.

Others suggest we should “focus systematically on student and teacher wellbeing”, whatever that means. Or, we should reduce teachers’ “audit anxiety”. Or, the problem is “teachers [tend] to focus on content rather than student learning”. Or, the problem is a “behaviour crisis”. Or, we should have “increased scrutiny of university education degrees” and “support [students’] schooling at home”. And, we could introduce “master teachers”. But apparently “more testing is not the answer”. In any case, “The time for talk is over”, according to a speech by Minister Tehan.

Some of these suggestions are, of course, simply ludicrous. Others, and others we haven’t mentioned, have at least a kernel of truth, and a couple we can strongly endorse.

No institution we can see, however, no person we have read, seems ready to face up to the systemic corruption, to see the PISA results in the light of the fundamental perversion of mathematics education in Australia. Not a word we could see questioning the role of calculators and the fetishisation of their progeny. Not a note of doubt about the effect of computers. Not a single suggestion that STEM may not be an antidote but, rather, a poison. Barely a word on the “inquiry” swampland that most primary schools have become. And, barely a word on the loss of discipline, on the valuable and essential meanings of that word. What possible hope is there, then, for meaningful change?

We await PISA 2021 with unbated breath.

The Last Picture Show

The AustMS Education Afternoon is done and dusted. Thanks to our fellow speakers, and in particular to David Treeby, who bucked the trend and offered something of genuine value. And, thanks to all those who turned up. It was great to see some old faces, and to meet some new ones. One should also acknowledge AAMT and AMSI and MAV. The effort these institutions made to promote the event is noted and is reassuring.

The plan is to write some posts based on our presentation, in the near future. That’s perhaps not as entertaining as a live delivery from a vodka-infused Marty, but we’ll do what we can.

As for future presentations, we very much doubt it. In all likelihood, that was the last picture show.

MoP 2 : A One-Way Conversation

We’re not particularly looking to blog about censorship. In general, we think the problem (in, e.g., Australia and the US) is overhyped. The much greater problem is self-censorship, where the media and the society at large can’t think or write about what they fail to see; so, for example, a major country can have a military coup, but no one seems to notice. Sometimes, however, the issue is close enough to home, and the censorship sufficiently blatant, that it seems worth noting.

Greg Ashman, who we had cause to mention recently, has been censored in a needless and heavy-handed manner by Sasha Petrova, the education editor of The Conversation. The details are discussed by Ashman here, but it is easy to give the story in brief.

Kate Noble of the Mitchell Institute wrote an article for The Conversation, titled Children learn through play – it shouldn’t stop at pre-school. As the title suggests, Noble was arguing for more play-based learning in the early years of primary school. Ashman then added a (polite and referenced and carefully worded) comment, noting Noble’s failure to distinguish between knowledge that is more susceptible or less susceptible to play-based learning, and directly querying one of Noble’s examples, the possible learning benefits (or lack thereof) of playing with water. Ashman’s comment, along with the replies to his comment, was then deleted. When Ashman emailed Petrova, querying this, Petrova replied:

“Sure. I deleted [Ashman’s comment] as it is off topic. The article doesn’t call for less explicit instruction, nor is there any mention of it. It calls for more integration of play-based learning in early years of school to ease the transition to formal instruction – not that formal instruction (and even here it doesn’t specify that formal means “explicit”) must be abolished.”

Subsequently, it appears that Petrova has also deleted the puzzled commentary on the original deletion. And, who knows what else she has deleted? Such is the nature of censorship.

In general we have a lot of sympathy for editors, such as Petrova, of public fora. It is very easy to err one way or the other, and then to be hammered by Team A or Team B.  Indeed, and somewhat ironically, Ashman had a post just a week ago that was in part critical of The Conversation’s new policy towards climate denialist loons; in that instance we thought Ashman was being a little tendentious and our sympathies were much more with The Conversation’s editors.

But, here, Petrova has unquestionably screwed up. Ashman was adding important, directly relevant and explicitly linked qualification to Noble’s article, and in a properly thoughtful and collegial manner. Ashman wasn’t grandstanding, he was contributing in good faith. He was conversing.  Moreover, Petrova’s stated reason for censoring Ashman is premised on a ludicrously narrow definition of “topic”, which even on its own terms fails here, and in any case has no place in academic discourse or public discourse.

Petrova, and The Conversation, owes Ashman an apology.

Implicit Suggestions

One of the unexpected and rewarding aspects of having started this blog is being contacted out of the blue by students. This included an extended correspondence with one particular VCE student, whom we have never met and of whom we know very little, other than that this year they undertook UMEP mathematics (Melbourne University extension). The student emailed again recently, about the final question on this year’s (calculator-free) Specialist Mathematics Exam 1 (not online). Though perhaps not (but also perhaps yes) a WitCH, the exam question (below), and the student’s comments (belower), seemed worth sharing.

Hi Marty,

Have a peek at Question 10 of Specialist 2019 Exam 1 when you get a chance. It was a 5 mark question, only roughly 2 of which actually assessed relevant Specialist knowledge – the rest was mechanical manipulation of ugly fractions and surds. Whilst I happened to get the right answer, I know of talented others who didn’t.

I saw a comment you made on the blog regarding timing sometime recently, and I couldn’t agree more. I made more stupid mistakes than I would’ve liked on the Specialist exam 2, being under pressure to race against the clock. It seems honestly pathetic to me that VCAA can only seem to differentiate students by time. (Especially when giving 2 1/2 hours for science subjects, with no reason why they can’t do the same for Maths.) It truly seems a pathetic way to assess or distinguish between proper mathematical talent and button-pushing speed writing.

I definitely appreciate the UMEP exams. We have 3 hrs and no CAS! That, coupled with the assignments that expect justification and insight, certainly makes me appreciate maths significantly more than from VCE. My only regret on that note was that I couldn’t do two UMEP subjects 🙂