The Slanted Tower of PISA

Here’s an interesting tidbit: PISA’s mathematics testing doesn’t test mathematics. Weird, huh? Who knew?

Well, we kinda knew. Trustworthy colleagues had suggested to us that PISA was slanted, but finding out the extent of that slant, like lying-on-the-ground slant, was genuinely surprising. (We’re clearly just too optimistic about the world of education.) Not that we had any excuse for being surprised; there were clues of mathematical crime in plain sight, and it was easy enough to locate the bodies.

The first clues are on PISA’s summary page on “Mathematics Performance”. The title is already a concern; qualifications and elaborations of “mathematics” usually indicate some kind of dilution, and “performance” sounds like a pretty weird elaboration. Perhaps “mathematics performance” might be dismissed as an eccentricity, but what follows cannot be so dismissed. Here is PISA’s summary of the meaning of “mathematical performance”:

Mathematical performance, for PISA, measures the mathematical literacy of a 15 year-old to formulate, employ and interpret mathematics in a variety of contexts to describe, predict and explain phenomena, recognising the role that mathematics plays in the world. The mean score is the measure. A mathematically literate student recognises the role that mathematics plays in the world in order to make well-founded judgments and decisions needed by constructive, engaged and reflective citizens.

The alarms are set off by “mathematical literacy”, a pompous expression that promises more than, while signalling we’ll be getting much less than, straight mathematics. All doubt is then ended with the phrase “the role that mathematics plays in the world”, which is so fundamental that it is repeated verbatim.

What this sums to, of course, is numeracy, the noxious weed that inevitably chokes everything whenever there’s an opportunity to discuss the teaching of mathematics. What this promises is that, akin to NAPLAN, PISA’s test of “mathematical performance” will centre on shallow and contrived scenarios, presented with triple the required words, and demanding little more than simple arithmetic. Before investigating PISA’s profound new world, however, there’s another aspect of PISA that really could do with a whack.

We have been told that the worldly mathematics that PISA tests is needed by “constructive, engaged and reflective citizens”. Well, there’s nothing like irrelevant and garishly manipulative salesmanship to undermine what you’re selling. The puffing up of PISA’s “world” mathematics has no place in what should be a clear and dispassionate description of the nature of the testing. Moreover, even on its own terms, the puffery is silly. The whole point of mathematics is that it is abstract and transferable, that the formulas and techniques illustrated in one setting can be applied in countless others. Whatever the benefits of PISA’s real world mathematics for constructive, engaged and reflective citizens, there will be the exact same benefits for destructive, disengaged psychopaths. PISA imagines Florence Nightingale calculating drip rates? We imagine a CIA torturer calculating drip rates.

PISA’s flamboyant self-promotion seems part and parcel of its reporting. Insights and Interpretations, PISA’s summary of the 2018 test results, comes served with many flavours of Kool-Aid. It includes endless fussing about “the digital world” which, we’re told, “is becoming a sizeable part of the real world”. Reading has changed, since it is apparently “no longer mainly about extracting information”. And teaching has changed, because there’s “the race with technology”. The document wallows in the growth mindset swamp, and on and on. But not to fear, because PISA, marvellous PISA, is on top of it, and has “evolved to better capture these demands”. More accurately, PISA has evolved to better market itself clothed in modern educational fetishism.

Now, to the promised crimes. The PISA test is administered to 15-year-old students (typically Year 9 or, more often, Year 10 in Australia). What mathematics, then, does PISA consider worth asking these fifteen-year-olds? PISA’s test questions page directs to a document containing questions from the PISA 2012 test, as well as sample questions and questions from earlier PISAs; these appear to be the most recent questions made publicly available, and are presumably representative of PISA 2018. In total, the document provides eleven scenarios or “units” from the PISA 2012 test, comprising twenty-six questions.

To illustrate what is offered in those twenty-six questions from PISA 2012, we have posted two of the units here, and a third unit here. It is also not difficult, however, to indicate the general nature of the questions. First, as evidenced by the posted units, and the reason for posting them elsewhere, the questions are long and boring; the main challenge of these units is to suppress the gag reflex long enough to digest them. As for the mathematical content, as we flagged, there is very little; indeed, there is less mathematics than there appears, since students are permitted to use a calculator. Predictably, every unit is a “context” scenario, without a single straight mathematics question. Then, for about half of the twenty-six questions, we would categorise the mathematics required to be somewhere between easy and trivial, involving a very simple arithmetic step (with calculator) or simple geometric idea, or less. About a quarter of the questions are computationally longer, involving a number of arithmetic steps (with calculator), but contain no greater conceptual depth. The remaining questions are in some sense more conceptual, though that “more” should be thought of as “not much more”. None of the questions could be considered deep, or remotely interesting. Shallowness aside, the breadth of mathematics covered is remarkably small. These are fifteen-year-old students being tested, but no geometry is required beyond the area of a rectangle, Pythagoras’s theorem and very simple fractions of a circle; there is no trigonometry or similarity; there is no probability; there are no primes or powers or factorisation; there are no explicit functions, and the only implicit functional behaviour is linear.

Worst of all, PISA’s testing of algebra is evidently close to non-existent. There is just one unit, comprising two questions, requiring any algebra whatsoever. That unit concerns a nurse (possibly a CIA torturer) calculating drip rates. Minus the tedious framing and the pointless illustration, the scenario boils down to consideration of the formula

D = dv/(60n).

(The meaning of the variables and the formula needn’t concern us here, although we’ll note that it takes a special type of clown to employ an upper case D and a lower case d in the same formula.)

There are two questions on this equation, the first asking for the change in D if n is doubled. (There is some WitCH-like idiocy in the suggested grading for the question, but we’ll leave that as a puzzle for the reader.) For the second question (labelled “Question 3” for God knows what reason), students are given specific, simple values of D, d and n, and they are required to calculate v (with a calculator). That’s it. That is the sum total of the algebra on the twenty-six questions, and that is disgraceful.
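For the record, the entire algebraic content of the unit fits comfortably in a few lines of code. Here’s a quick sketch in Python, with invented sample values (PISA’s actual numbers aren’t reproduced here):

```python
# The drip-rate formula from the PISA unit: D = d*v/(60*n).
# (The meanings of the variables don't matter for the algebra.)

def drip_rate(d, v, n):
    return d * v / (60 * n)

# The first question's idea: doubling n halves D.
D1 = drip_rate(d=25, v=360, n=3)   # sample values, not PISA's
D2 = drip_rate(d=25, v=360, n=6)
assert D2 == D1 / 2

# "Question 3"'s idea: given D, d and n, solve for v.
# Rearranging the formula: v = 60*n*D/d.
def solve_v(D, d, n):
    return 60 * n * D / d

assert solve_v(D1, d=25, n=3) == 360
```

That’s the lot: one rearrangement and one substitution, calculator permitted.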

Algebra is everything in mathematics. Algebra is how we name the quantity we’re after, setting the stage for its capture. Algebra is how we signify pattern, allowing us to hunt for deeper pattern. Algebra is how we indicate the relationship between quantities. Algebra is how Descartes captured geometry, and how Newton and Leibniz captured calculus.

It is not difficult to guess why PISA sidelines algebra, since it is standard, particularly from numeracy fanatics, to stereotype algebra as abstract, as something only within mathematics. But of course, even from PISA’s blinkered numeracy perspective, this is nonsense. You want to think about mathematics in the world? Then the discovery and analysis of patterns, and the analysis of relationships and functions, is the heart of it. And what makes the heart beat is algebra.

Does PISA offer anything of value? Well, yeah, a little. It is a non-trivial and worthwhile skill to be able to extract intrinsically simple mathematics from a busy and wordy scenario. But it’s not that important, and it’s hardly the profound “higher order” thinking that some claim PISA offers. It is a shrivelled pea of an offering, which completely ignores vast fields of mathematics and mathematical thought.

PISA’s disregard of algebra is ridiculous and shameful, the final stake in PISA’s thoroughly nailed coffin. It demonstrates that PISA isn’t “higher” or “real”, it is just other, and it is an other we would all be much better off without.

INtHiTS 3: ScoMoFo Is Told Where to Go

(Image: scottmarsh.com.au)

Australia is on fire. Which has led to pretty much the whole country, and a good part of the planet, telling Scott “Morals” Morrison to go fuck himself. But supposedly Morals “understands”, and can see why people “fixate”, and “doesn’t take these things personally”.

You’re wrong, Morals. It is personal. Many, many people are disgusted by your person. They are disgusted because you’re a sanctimonious, unprincipled, greasy huckstering halfwit who deserves to fry in Hell if only for the sheer loathsome meaninglessness of your government. Fuck you, fuck the mining lizardmen and Murdoch gargoyles who cover for you, and fuck all the dumb fucks who allowed themselves to be conned into voting for you.

Foundation Stoned

The VCAA is reportedly planning to introduce Foundation Mathematics, a new, lower-level year 12 mathematics subject. According to Age reporter Madeleine Heffernan, “It is hoped that the new subject will attract students who would not otherwise choose a maths subject for year 12 …”. Which is good, why?

Predictably, the VCAA is hell-bent on solving the wrong problem. It simply doesn’t matter whether more students continue with mathematics in Year 12. What matters is that so many students learn bugger all mathematics in the previous twelve years. And why should anyone believe that, at that final stage of schooling, one more year of Maths-Lite will make any significant difference?

The problem with Year 12 that the VCAA should be attempting to solve is that so few students are choosing the more advanced mathematics subjects. Heffernan appears to have interviewed AMSI Director Tim Brown, who noted the obvious, that introducing the new subject “would not arrest the worrying decline of students studying higher level maths – specialist maths – in year 12.” (Tim could have added that Year 12 Specialist Mathematics is also a second-rate subject, but one can expect only so much from AMSI.)

It is not clear that anybody other than the VCAA sees any wisdom in their plan. Professor Brown’s extended response to Heffernan is one of quiet exasperation. The comments that follow Heffernan’s report are less quiet and are appropriately scathing. So who, if anyone, did the VCAA find to endorse this distracting silliness?

But, is it worse than silly? VCAA’s new subject won’t offer significant improvement, but could it make matters worse? According to Heffernan, there’s nothing to worry about:

“The new subject will be carefully designed to discourage students from downgrading their maths study.”

Maybe. We doubt it.

Ms. Heffernan appears to be a younger reporter, so we’ll be so forward as to offer her a word of advice: if you’re going to transcribe tendentious and self-serving claims provided by the primary source for and the subject of your report, it is accurate, and prudent, to avoid reporting those claims as if they were established fact.

A PISA With Almost the Lot

At current count, there have been two thousand, one hundred and seventy-three reports and opinion pieces on Australia’s terrific PISA results. We’ve heard from a journalist, a former PISA director, the Grattan Institute, the Gonski Institute, the Mitchell Institute, ACER, the Innovative Research University Group, The Centre for Independent Studies, the AMSI Schools Project Manager, the Australian Association of Mathematics Teachers, the Australian Science Teachers Association, Learning First, an education journalist, an education editor, an education lecturer, a psychometrician, an education research fellow, a lecturer in educational assessment, an emeritus professor of education, a plethora of education academics, a shock jock, a shock writer, a federal education minister, a state education minister, another state education minister, a shadow education minister, an economist, a teacher and a writer.

So, that’s just about everyone, right?

A Quick Message for Holden and Piccoli

A few days ago the Sydney Morning Herald published yet another opinion piece on Australia’s terrific PISA results. The piece was by Richard Holden, a professor of economics at UNSW, and Adrian Piccoli, formerly a state Minister for Education and now director of the Gonski Institute at UNSW. Holden and Piccoli’s piece was titled

‘Back to basics’ is not our education cure – it’s where we’ve gone wrong

Oh, really? And what’s the evidence for that? The piece begins,

A “back to basics” response to the latest PISA results is wrong and ignores the other data Australia has spent more than 10 years obsessing about – NAPLAN. The National Assessment Program – Literacy and Numeracy is all about going back to basics ...

The piece goes on, arguing that the years of emphasis on NAPLAN demonstrate that Australia has concentrated upon and is doing fine with “the basics”, at the expense of the “broader, higher-order skills tested by PISA”.

So, here’s our message:

Dear Professors Holden and Piccoli, if you are so ignorant as to believe NAPLAN and numeracy are about “the basics”, and if you can exhibit no awareness that the Australian Curriculum has continued the trashing of “the basics”, and if you are so stuck in the higher-order clouds as to be unaware of the lack of, and critical need for, properly solid lower-order foundations, and if you can write an entire piece on PISA without a single use of the words “arithmetic” and “mathematics”, then please, please just shut the hell up and go away.

The Dunning-Kruger Effect Effect

The Dunning-Kruger effect is well known. It is the disproportionate confidence displayed by those who are less competent or less well informed.

Less well known, and more pernicious, is the Dunning-Kruger Effect effect. This is the disproportionate confidence of an academic clique that considers criticism of the clique can only be valid if the critic has read at least a dozen of the clique’s self-indulgent, jargon-filled papers. A clear indication of the Dunning-Kruger Effect effect is the readiness to chant “Dunning-Kruger effect”.

Fibonacci Numbers to the Rescue

No, really. This time it’s true. Just wait.

Our favourite mathematics populariser at the moment is Evelyn Lamb. Lamb’s YouTube videos are great because they don’t exist. Evelyn Lamb is a writer. (That is not Lamb in the photo above. We’ll get there.)

It is notoriously difficult to write good popular mathematics (whatever one might mean by “popular”). It is very easy to drown a mathematics story in equations and technical details. But, in trying to avoid that error, the temptation then is to cheat and to settle for half-truths, or to give up entirely and write maths-free fluff. And then there’s the writing, which must be engaging and crystal clear. There are very few people who know enough of mathematics and non-mathematicians and words, and who are willing to sweat sufficiently over the details, to make it all work.

Of course the all-time master of popular mathematics was Martin Gardner, who wrote the Mathematical Games column in Scientific American for approximately three hundred years. Gardner is responsible for inspiring more teenagers to become mathematicians than anyone else, by an order of magnitude. If you don’t know of Martin Gardner then stop reading and go buy this book. Now!

Evelyn Lamb is not Martin Gardner. No one is. But she is very good. Lamb writes the mathematics blog Roots of Unity for Scientific American, and her posts are often surprising, always interesting, and very well written.

That is all by way of introduction to a lovely post that Lamb wrote last week in honour of Julia Robinson, who would have turned 100 on December 8. That is Robinson in the photo above. Robinson’s is one of the great, and very sad, stories of 20th century mathematics.

Robinson worked on Diophantine equations, polynomial equations with integer coefficients and where we’re hunting for integer solutions. So, for example, the equation x² + y² = z² is Diophantine with the integer solution (3,4,5), as well as many others. By contrast, the Diophantine equation x² + y² = 3 clearly has no integer solutions.

Robinson did groundbreaking work on Hilbert’s 10th problem, which asks if there exists an algorithm to determine whether a Diophantine equation has (integer) solutions. Robinson was unable to solve Hilbert’s problem. In 1970, however, building on the work of Robinson and her collaborators, the Russian mathematician Yuri Matiyasevich was able to solve the problem in the negative: no such algorithm exists. And the magic key that allowed Matiyasevich to complete Robinson’s work was … wait for it … Fibonacci numbers.

Label the Fibonacci numbers as follows:

F₁ = 1, F₂ = 1, F₃ = 2, F₄ = 3, F₅ = 5, F₆ = 8, …

It turns out that with this labelling the Fibonacci numbers have the following weird property:

If Fₙ² divides Fₘ then Fₙ divides m.

You can check what this is saying with n = 3 and m = 6. (We haven’t been able to find a proof online to which to link.) How does that help solve Hilbert’s problem? Read Lamb’s post, and her more bio-ish article in Science News, and find out.
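For readers who’d like a machine to do the checking, here’s a small Python spot-check of the property over a range of small indices. (A sanity check, of course, not a proof.)

```python
# Spot-check: if F_n^2 divides F_m then F_n divides m,
# with the labelling above: F_1 = F_2 = 1, F_3 = 2, ...

def fib(k):
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

# Search for counterexamples among small n and m (n >= 3 so F_n > 1).
for n in range(3, 10):
    for m in range(1, 200):
        if fib(m) % fib(n) ** 2 == 0:
            assert m % fib(n) == 0, (n, m)   # no counterexample found

# The n = 3, m = 6 case from the text: F_3 = 2 and F_6 = 8,
# so F_3^2 = 4 divides 8, and sure enough F_3 = 2 divides 6.
assert fib(6) % fib(3) ** 2 == 0 and 6 % fib(3) == 0
```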

The Super-Rigging of Gambling

Last week, the ABC set to bashing bet365, bringing to light some of the huge betting company’s unsavoury practices. To which we respond, “Well done”. And, “Well, duh”.

The ABC noted a number of dodgy tactics employed by bet365, writing it all up as astonishing revelation. Perhaps the ABC reporters and their cloistered readers were astonished, but many Australian gamblers would have simply yawned. All gambling companies employ similar tactics and they’ve always done it. It is not new and it is not news. It is all part of the standard super-rigging of gambling.

To begin, it is no secret that gambling is rigged; even bad gamblers know that the odds are stacked against them. Mathematically, the rigging of a game is expressed in terms of expectation. In a fair game the average or “expected” win is zero. For example, flipping a coin in the natural win-lose manner is fair. By comparison, roulette has 37 possible outcomes but the payouts are calculated as if there were only 36 numbers. (The payout is “even money” if you bet on “red” or “black”, and the payout is “35 to 1” if you bet on a number.) This implies that the average loss per spin on roulette is 1/37 of the amount bet, or an expectation of about -3%. The expectation being negative indicates the rigging.
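The roulette arithmetic is easy enough to spell out in a few lines of Python, done with exact fractions so nothing is hidden by rounding:

```python
from fractions import Fraction

# European roulette: 37 pockets, but payouts priced as if there were 36.
POCKETS = 37

# Even-money bet on red: 18 winning pockets, paid 1 to 1.
ev_red = Fraction(18, POCKETS) * 1 + Fraction(19, POCKETS) * (-1)

# Single-number bet: 1 winning pocket, paid 35 to 1.
ev_number = Fraction(1, POCKETS) * 35 + Fraction(36, POCKETS) * (-1)

# Both bets lose, on average, 1/37 of the stake (about 2.7%).
assert ev_red == ev_number == Fraction(-1, 37)
```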

Given that gambling institutions intend to offer only rigged, negative expectation games, what can punters do about it? Lots, and not much. They can cheat, of course. Or, they can become experts on horses or golfers or whatever. Or, they can look for mechanical or human flaws. There’s a surprising number of avenues to explore as well as, of course, many dead ends. (To illustrate the subtlety, we’ve included a few gambling puzzles at the end of the post.) Finding and exploiting opportunities, however, takes work and/or sophistication and/or capital. There’s lunch there, but it’s not free.

So, as a general rule, punters are left with only losing games to play. But how, then, does a gambling site entice a punter to play a game of negative expectation?

Yes, it’s a stupid question. Obviously there’s no shortage of punters willing to bet on appallingly bad games. But, if you run a gambling site, the real question is how to get the punter to gamble on your site. And that’s where one form of super-rigging begins. Super-rigging is making a betting opportunity appear better than it is. This is built in to the way poker machines work, and betting sites do it as a matter of routine.

Betting sites have various ways of enticing punters. To begin, there are sign-up bonuses. So, for example, you might sign up with a $200 deposit and the site will throw in $100 of “free bets”. That’s akin to signing up for ten sessions at a gym and getting a few “free” lessons chucked in. It’s basically fine, with what you see being pretty much what you get. After that, however, there are innumerable betting “promotions”, many blasting out from the TV and destroying everyone’s enjoyment of the footy. (Unless you’re a Saints fan, in which case any distraction from the actual game is considered a plus.)

The effect of gambling promotions is to change the expectation of the bets. For example, a very common offer is “money back” if the punter bets on a horse and that horse comes 2nd or 3rd. (That “money back” is most commonly in the form of a “free bet” equal to the size of the original wager, which is an important distinction but one we can ignore here.) Then, given a good horse may have, say, a 30% chance of coming 2nd or 3rd, an expectation of about -10% may become an expectation of about +20%. There’s no guarantee of winning on that race, of course, but it’s now a sensible bet. These promotions are obviously attractive to punters.
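To see how such a promotion flips the sign, here’s the arithmetic sketched in Python with made-up but consistent numbers. The odds and probabilities are illustrative assumptions, not from any real market, and for simplicity the “money back” is treated as a cash refund (as noted, a free-bet refund is worth somewhat less):

```python
from fractions import Fraction

# Illustrative assumptions: a horse at decimal odds of 3.0, with a 30%
# chance of winning and a 30% chance of running 2nd or 3rd.
odds = Fraction(3)          # assumed decimal odds
p_win = Fraction(3, 10)     # assumed win probability
p_place = Fraction(3, 10)   # assumed 2nd-or-3rd probability

# Plain bet (unit stake): a win returns stake * odds, else the stake is lost.
ev_plain = p_win * odds - 1            # -1/10, i.e. about -10%

# With the promotion, a 2nd or 3rd also returns the stake.
ev_promo = p_win * odds + p_place - 1  # +1/5, i.e. about +20%

assert ev_plain == Fraction(-1, 10) and ev_promo == Fraction(1, 5)
```

With these numbers the -10% expectation becomes +20%, matching the rough figures above.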

How do the betting sites avoid losing a ton of money on these promotions? Often they don’t have to do much of anything. To begin, most promotions will come with a relatively small maximum bet size, of $50 or so; this is fair enough, just the same as Coles limiting some sale item to “five per customer”. Beyond that, the promotion can be pretty much what it appears to be, in itself a loser for the company but good advertising to get the punters onto the site to bet further. But there are also traps and nasty tricks.

First of all, betting promotions vary dramatically in value, with more than a few being close to worthless. They can be analogous to Motor Heaven blaring that a car is “50% off”, after having doubled the price the previous week. Secondly, even valuable promotions can be used poorly. The horse promotion above, for example, would be essentially worthless if used to bet on a massive favourite or a sluggish also-ran. Again, one might compare this to a commercial situation, say Harvey Norman giving $10 off on any one item in the store and someone using that offer when buying an overpriced TV.

Amidst all the noise, however, there are many good promotions that can create positive expectation on small bets when used intelligently. So, what happens then? Then what happens is what the ABC story is all about.

The gambling sites simply nobble any punter who is not a loser, in any manner they can: they will refuse to offer the promotions; they will limit the size of bets to approximately zero; they will lower the odds. What does that leave? It leaves the betting sites screaming out their offers, everywhere, while any gambler who is halfway successful is banned from those offers, if not from the site entirely.

And that is the super-rigging. The betting sites pretend they are offering positive expectation, but they will only continue that offer for people who use the offer in a useless manner. And, unlike the other aspects we have mentioned, such nasty practice has no commercial analogy that anyone would regard as acceptable. Imagine going into Harvey Norman and being shoved out the door, with some thug yelling “You only buy items on special, so we don’t want you here”. It is unthinkable at Harvey Norman but, in the context of gambling, it is universal.

How can the betting sites get away with this nastiness? Because the ACCC, the federal body responsible for overseeing and enforcing consumer law, is all bark and no bite. And, because the state governments and government regulators only care about whether they’re getting their cut of the loot.

It is obscene. And, as we indicated, none of it is news.


PUZZLES

Here are three gambling puzzles. If you are familiar with the puzzles and are sure you already know the answers, then please refrain from commenting for a while, leaving others free to think about them.

Puzzle 1. You are gambling on roulette, which has 18 red numbers, 18 black numbers and 1 green number (the zero). You watch the wheel spin and the ball lands on a red number. What colour should you bet on next, red or black? Or, doesn’t it matter?

Puzzle 2. A casino gives you a free bet of $10. You can place the bet on any standard casino game, or on a horse, or whatever. If the bet wins, you get your winnings as usual. (For example, if you bet “red” on roulette and win, you’d win $10.) Win or lose, the casino keeps the coupon. How much is the free bet worth?

Puzzle 3. You have found a betting game with positive expectation; it’s win-lose (like betting on “red” or “black” in roulette), but you have a 55% chance of winning and only a 45% chance of losing. You start with $1000 and hope to double your money. What is the probability that you will succeed before losing your $1000?

The NAPLAN Numeracy Test Test

The NAPLAN Numeracy Test Test is intended for education academics and education reporters. The test consists of three questions:

Q1. Are you aware that “numeracy”, to the extent that it is anything, is different from arithmetic and much less than solid school mathematics?

Q2. Do you regard it important to note and to clarify these distinctions?

Q3. Are you aware of the poverty in NAPLAN testing numeracy rather than mathematics?

The test is simple, and the test is routinely failed. NAPLAN is routinely represented as testing the “basics”, which is simply false. As a consequence, the interminable conflict between “inquiry” and “basics” has been distorted beyond sense. (A related and similarly distorting falsity is the representation of current school mathematics texts as “traditional”.) This framing of NAPLAN leaves no room for the plague-on-both-houses disdain which, we’d argue, is the only reasonable position.

Most recently this test was failed, and dismally so, by the writers of the Interim Report on NAPLAN, which was prepared for the state NSW government and was released last week. The Interim Report is short, its purpose being to prepare the foundations for the final report to come, to “set out the major concerns about NAPLAN that we have heard or already knew about from our own work and [to] offer some preliminary thinking”. The writers may have set out to do this, but either they haven’t been hearing or they haven’t been listening.

The Interim Report considers a number of familiar and contentious aspects of NAPLAN: delays in reporting, teaching to the test, misuse of test results, and so on. Mostly reasonable concerns, but what about the tests themselves, what about concerns over what the tests are testing? Surely the tests’ content is central? On this, however, at least before limited correction, the Report implies that there are no concerns whatsoever.

The main section of the Report is titled Current concerns about NAPLAN, which begins with a subsection titled Deficiencies in tests. This subsection contains just two paragraphs. The first paragraph raises the issue that a test such as NAPLAN “will” contain questions that are so easy or so difficult that little information is gained by including them. However, “Prior experimental work by ACARA [the implementers of NAPLAN] showed that this should be so.” In other words, the writers are saying “If you think ACARA got it wrong then you’re wrong, because ACARA told us they got it right”. That’s just the way one wishes a review to begin, with a bunch of yes men parroting the organisation whose work they are supposed to be reviewing. But, let’s not dwell on it; the second paragraph is worse.

The second “deficiencies” paragraph is concerned with the writing tests. Except it isn’t; it is merely concerned with the effect of moving NAPLAN online on the analysis of students’ tests. There’s not a word on the content of the tests. True, in a later, “Initial thinking” section the writers have an extended discussion about issues with the writing tests. But why are these issues not front and centre? Still, it is not our area and so we’ll leave it, comfortable in our belief that ACARA is mucking up literacy testing and will continue to do so.

And that’s it for “deficiencies in tests”, without a single word about suggested or actual deficiencies of the numeracy tests. Anywhere. Moreover, the term “arithmetic” never appears in the Report, and the word “mathematics” appears just once, as a semi-synonym for numeracy: the writers echo a suggested deficiency of NAPLAN, that one effect of the tests may be to “reduce the curriculum, particularly in primary schools, to a focus on literacy/English and numeracy/mathematics …”. One can only wish it were true.

How did this happen? The writers boast of having held about thirty meetings in a four-day period and having met with about sixty individuals. Could it possibly be the case that not one of those sixty individuals raised the issue that numeracy might be an educational fraud? Not a single person?

The short answer is “yes”. It is possible that the Report writers were warned that “numeracy” is snake oil and that testing it is a foolish distraction, with the writers then, consciously or unconsciously, simply filtering out that opinion. But it is also entirely possible that the writers heard no dissenting voice. Who did the writers choose to meet? How were those people chosen? Was the selection dominated by the predictable maths ed clowns and government hacks? Was there consultation with a single competent and attuned mathematician? It is not difficult to guess the answers.

The writers have failed the test, and the result of that failure is clear. The Interim Report is nonsense, setting the stage for a woefully misguided review that in all probability will leave the ridiculous NAPLAN numeracy tests still firmly in place and still just as ridiculous.