NAPLAN’s Numerological Numeracy

This year Australia celebrates ten years of NAPLAN testing, and Australians can ponder the results. Numerous media outlets have reported “a 2.55% increase in numeracy” over the ten years. This is accompanied by a 400% increase in the unintended irony of Australian education journalism.

What is the origin of that 2.55% and precisely what does it mean to have “an increase in numeracy” by that amount? Yes, yes, it clearly means “bugger all”, but bugger all of what? It is a safe bet that no one reporting the percentage has a clue, and it is not easy to determine.

The media appear to have taken the percentage from a media release from Simon Birmingham, the Federal Education and Training Minister. (Birmingham, it should be noted, is one of the better ministers in the loathsome Liberal government; he is merely hopeless rather than malevolent.) As best we can decipher, the 2.55% seems to refer to the “% average change in NAPLAN mean scale score [from 2008 to 2017], average for domains across year levels”. Whatever that means.

ACARA, the administrators of NAPLAN, issued their own media release on the 2017 NAPLAN results. This release does not quote any percentages but indicates that the “2017 summary information” can be found at the NAPLAN reports page. Two weeks after ACARA’s media release, no such information is contained on or linked from that page, nor on the page titled NAPLAN 2017 summary results. Both pages link to a glossary to explain “mean scale score”, which in turn explains nothing. The 2016 NAPLAN National Report contains the expression 207 times, without once even pretending to explain what it means. The 609-page Technical Report from 2015 (the latest available on ACARA’s website) appears to contain the explanation, though the precise expression is never used and nothing remotely resembling a user-friendly summary is included.

To put it very briefly, each student’s submitted test is given a “scaled score”. One purpose of this is to be able to compare tests and test scores from different years. The statistical process is massively complicated, and in particular it includes a weighting for the “difficulty” of each test question. There is plenty that could be queried here, particularly given ACARA’s peculiar habit of including test questions that are so difficult they can’t be answered. But, for now, we’ll accept those scaled scores as a thing. Then, for example, the national average for 2008 Year 3 numeracy scaled scores was 396.9. This increased to 402.0 in 2017, amounting to a percentage increase of 1.29%. Such percentage increases from 2008 to 2017 can then be averaged over the domains and over the four year levels, and (we think) this results in that magical 2.55%.
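For the numerate, our best guess at the arithmetic can be sketched in a few lines of Python. The Year 3 scaled scores are the published figures; the figures for the other year levels below are placeholders, purely to illustrate the averaging step:

```python
# Sketch of the percentage arithmetic behind the reported figure.
# Only the Year 3 scores (396.9 -> 402.0) are real published means;
# the Year 5/7/9 increases below are PLACEHOLDERS for illustration.

def percent_increase(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

year3 = percent_increase(396.9, 402.0)
print(f"Year 3 increase: {year3:.2f}%")  # roughly 1.3%

# Averaging such increases over the four tested year levels
# (hypothetical values for Years 5, 7 and 9):
increases = [year3, 2.1, 3.4, 3.5]
print(f"Average increase: {sum(increases) / len(increases):.2f}%")
```

None of which, of course, tells anyone what a “2.55% increase in numeracy” actually means.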

It is anybody’s guess whether that “2.55% increase in numeracy” corresponds to anything real, but the reporting of the figure is simply hilarious. Numeracy, to the very little extent it means anything, refers to the ability to apply mathematics effectively in the real world. To then report on numeracy in such a manner, with a who-the-hell-cares free-floating percentage, is beyond ironic; it’s perfect.

But of course the stenographic reportage is just a side issue. The main point is that there is no evidence that ten years of NAPLAN testing, and ten years of shoving numeracy down teachers’ and students’ throats, has made one iota of difference.

NAPLAN’s Mathematical Nonsense, and What it Means for Rural Peru

The following question appeared on Australia’s Year 9 NAPLAN Numeracy Test in 2009:

y = 2x – 1

y = 3x + 2

Which value of x satisfies both of these equations?

It is a multiple choice question, but unfortunately “The question is completely stuffed” is not one of the available answers.

Of course the fundamental issue with simultaneous equations is the simultaneity. Both equations and both variables must be considered as a whole; it simply makes no sense to talk about solutions for x without reference to y. Unless y = -7 in the above equations, and there is no reason to assume that, no value of x satisfies both equations. The NAPLAN question is way beyond bad.
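For the record, the actual simultaneous solution is easy enough, as a quick Python sketch shows (equating the two expressions for y gives 2x - 1 = 3x + 2, so x = -3 and y = -7):

```python
# Solve y = 2x - 1 and y = 3x + 2 simultaneously.
# Setting a1*x + b1 = a2*x + b2 gives x = (b2 - b1) / (a1 - a2).

def solve_pair(a1, b1, a2, b2):
    """Solve y = a1*x + b1 and y = a2*x + b2 (assumes a1 != a2)."""
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

x, y = solve_pair(2, -1, 3, 2)
print(x, y)  # -3.0 -7.0
```

The point being, the solution only exists as a pair: x = -3 together with y = -7.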

It is always worthwhile pointing out NAPLAN nonsense, as we’ve done before and will continue to do in the future. But what does this have to do with rural Peru?

In a recent post we pointed out an appalling question from a nationwide mathematics exam in New Zealand. We flippantly remarked that one might expect such nonsense in rural Peru but not in a wealthy Western country such as New Zealand. We were then gently slapped in the comments for the Peruvian references: Josh queried whether we knew anything of Peru’s educational system, and Dennis questioned the purpose of bringing up Peru, since Australia’s NAPLAN demonstrates a “level of stupidity” for all the world to see. These are valid points.

It would have been prudent to have found out a little about Peru before posting, but we seem to be safe. Peru’s economy has been growing rapidly but is not nearly as strong as New Zealand’s or Australia’s. Peruvian school education is weak, and Peru seems to have no universities comparable to the very good universities in New Zealand and Australia. Life and learning in rural Peru appears to be pretty tough.

None of this is surprising, and none of it particularly matters. Our blog post referred to “rural Peru or wherever”. The point was that we can expect poorer education systems to throw up nonsense now and then, or even typically; in particular, lacking ready access to good and unharried mathematicians, it is unsurprising if exams and such are mathematically poor and error-prone.

But what could possibly be New Zealand’s excuse for that idiotic question? Even if the maths ed crowd didn’t know what they were doing, there is simply no way that a competent mathematician would have permitted that question to remain as is, and there are plenty of excellent mathematicians in New Zealand. How did a national exam in New Zealand fail to be properly vetted? Where were the mathematicians?

Which brings us to Australia and to NAPLAN. How could the ridiculous problem at the top of this post, or the question discussed here, make it into a nationwide test? Once again: where were the mathematicians?

One more point. When giving NAPLAN a thoroughly deserved whack, Dennis was not referring to blatantly ill-formed problems of the type above, but rather to a systemic and much more worrying issue. Dennis noted that NAPLAN doesn’t offer a mathematics test or an arithmetic test, but rather a numeracy test. Numeracy is pedagogical garbage and in the true spirit of numeracy, NAPLAN’s tests include no meaningful evaluation of arithmetic or algebraic skills. And, since we’re doing the Peru thing, it seems worth noting that numeracy is undoubtedly a first world disease. It is difficult to imagine a poorer country, one which must weigh every educational dollar and every educational hour, spending much time on numeracy bullshit.

Finally, a general note about this blog. It would be simple to write amusing little posts about this or that bit of nonsense in, um, rural Peru or wherever. That, however, is not the purpose of this blog. We have no intention of making easy fun of people or institutions honestly struggling in difficult circumstances; that includes the vast majority of Australian teachers, who have to tolerate and attempt to make sense of all manner of nonsense flung at them from on high. Our purpose is to point out the specific idiocies of arrogant, well-funded educational authorities that have no excuse for screwing up in the manner in which they so often do.

Accentuate the Negative

Each year about a million Australian school students are required to sit the Government’s NAPLAN tests. Produced by ACARA, the same outfit responsible for the stunning Australian Curriculum, these tests are expensive, annoying and pointless. In particular it is ridiculous for students to sit a numeracy test, rather than a test on arithmetic or more broadly on mathematics. It guarantees that the general focus will be wrong and that specific weirdnesses will abound. The 2017 NAPLAN tests, conducted last week, have not disappointed. Today, however, we have other concerns.

Wading into NAPLAN’s numeracy quagmire, one can often find a nugget or two of glowing wrongness. Here is a question from the 2017 Year 9 test:

In this inequality, n is a whole number.

\dfrac{7}{n} < \dfrac{5}{7}

What is the smallest possible value for n to make this inequality true?

The wording is appalling, classic NAPLAN. They could have simply asked:

What is the smallest whole number n for which \dfrac{7}{n} < \dfrac{5}{7}\,?

But of course the convoluted wording is the least of our concerns. The fundamental problem is that the use of the expression “whole number” is disastrous.

Mathematicians would avoid the expression “whole number”, but if pressed would most likely consider it a synonym for “integer”, as is done in the Australian Curriculum and in some dictionaries. With this interpretation, where the negative integers are included, the above NAPLAN question obviously has no solution. Sometimes, including in, um, the Australian Curriculum, “whole number” is used to refer to only the nonnegative integers or, more rarely, to only the positive integers. With either of these interpretations the NAPLAN question is pretty nice, with the solution n = 10. But it remains the case that, at best, the expression “whole number” is irretrievably ambiguous, and the NAPLAN question is fatally flawed.
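To see how much the interpretation matters, here is a small Python sketch, using exact fractions to avoid any rounding quibbles (note that n = 0 makes 7/n undefined anyway, so the “nonnegative” and “positive” readings agree here):

```python
from fractions import Fraction

# The NAPLAN inequality: 7/n < 5/7.
def satisfies(n):
    return Fraction(7, n) < Fraction(5, 7)

# Under the "positive integers" reading, brute force finds the answer:
n = 1
while not satisfies(n):
    n += 1
print(n)  # 10, since 7/10 = 0.7 < 5/7 = 0.714...

# Under the "all integers" reading there is no smallest solution:
# every negative n works, e.g.
print(satisfies(-1))  # True: 7/(-1) = -7 < 5/7
```

One interpretation gives a pretty nice question; the other gives no question at all.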

Pointing out an error in a NAPLAN test is like pointing out one of Donald Trump’s lies: you feel you must, but doing so inevitably distracts from the overall climate of nonsense and nastiness. Still, one can hope that ACARA will be called on this, will publicly admit that they stuffed up, and will consider employing a competent mathematician to vet future questions. Unfortunately, ACARA is just about as inviting of criticism and as open to admitting error as Donald Trump.