This year Australia celebrates ten years of NAPLAN testing, and Australians can ponder the results. Numerous media outlets have reported “a 2.55% increase in numeracy” over the ten years. This is accompanied by a 400% increase in the unintended irony of Australian education journalism.
What is the origin of that 2.55% and precisely what does it mean to have “an increase in numeracy” by that amount? Yes, yes, it clearly means “bugger all”, but bugger all of what? It is a safe bet that no one reporting the percentage has a clue, and it is not easy to determine.
The media appear to have taken the percentage from a media release from Simon Birmingham, the Federal Education and Training Minister. (Birmingham, it should be noted, is one of the better ministers in the loathsome Liberal government; he is merely hopeless rather than malevolent.) As best we can decipher, that 2.55% seems to refer to the “% average change in NAPLAN mean scale score [from 2008 to 2017], average for domains across year levels”. Whatever that means.
ACARA, the administrators of NAPLAN, issued their own media release on the 2017 NAPLAN results. This release does not quote any percentages but indicates that the “2017 summary information” can be found at the NAPLAN reports page. Two weeks after ACARA’s media release, no such information appears on, or is linked from, that page, nor on the page titled NAPLAN 2017 summary results. Both pages link to a glossary, to explain “mean scale score”, which in turn explains nothing. The 2016 NAPLAN National Report contains the expression 207 times, without once even pretending to explain what it means. The 609-page Technical Report from 2015 (the latest available on ACARA’s website) appears to contain the explanation, though the precise expression is never used and nothing remotely resembling a user-friendly summary is included.
To put it very briefly, each student’s submitted test is given a “scaled score”. One purpose of this is to be able to compare tests and test scores from different years. The statistical process is massively complicated and in particular it includes a weighting for the “difficulty” of each test question. There is plenty that could be queried here, particularly given ACARA’s peculiar habit of including test questions that are so difficult they can’t be answered. But, for now, we’ll accept those scaled scores as a thing. Then, for example, the national average for 2008 Year 3 numeracy scaled scores was 396.9. This increased to 402.0 in 2016, amounting to a percentage increase of 1.29%. These percentage increases from 2008 to 2017 can then be averaged over the four tested year levels, and (we think) this results in that magical 2.55%.
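For the morbidly curious, the arithmetic we are guessing at can be sketched in a few lines. Only the Year 3 numeracy means (396.9 and 402.0) are quoted in the reports; the changes for the other year levels below are invented placeholders, purely to illustrate the averaging step.

```python
# A minimal sketch of the arithmetic that appears to produce the headline
# figure. The Year 3 numeracy means (396.9 in 2008, 402.0 later) are the
# only values quoted in the reports; everything else here is hypothetical.

def pct_increase(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Year 3 numeracy, using the quoted mean scale scores.
year3 = pct_increase(396.9, 402.0)
print(round(year3, 2))  # -> 1.28, i.e. the ~1.29% mentioned above

# The 2.55% then seems to be this kind of percentage, averaged over the
# four tested year levels (3, 5, 7, 9). The last three values here are
# invented, for illustration only.
changes = [year3, 2.1, 3.4, 3.5]
headline = sum(changes) / len(changes)
print(round(headline, 2))
```

Note that whether the averaging runs over year levels, over domains, or over both is exactly the sort of detail the reports decline to spell out.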
It is anybody’s guess whether that “2.55% increase in numeracy” corresponds to anything real, but the reporting of the figure is simply hilarious. Numeracy, to the very little extent it means anything, refers to the ability to apply mathematics effectively in the real world. To then report on numeracy in such a manner, with a who-the-hell-cares free-floating percentage, is beyond ironic; it’s perfect.
But of course the stenographic reportage is just a side issue. The main point is that there is no evidence that ten years of NAPLAN testing, and ten years of shoving numeracy down teachers’ and students’ throats, has made one iota of difference.