Each year about a million Australian school students are required to sit the Government’s NAPLAN tests. Produced by ACARA, the same outfit responsible for the stunning Australian Curriculum, these tests are expensive, annoying and pointless. In particular, it is ridiculous for students to sit a numeracy test, rather than a test on arithmetic or, more broadly, on mathematics. It guarantees that the general focus will be wrong and that specific weirdnesses will abound. The 2017 NAPLAN tests, conducted last week, have not disappointed. Today, however, we have other concerns.
Wading into NAPLAN’s numeracy quagmire, one can often find a nugget or two of glowing wrongness. Here is a question from the 2017 Year 9 test:
In this inequality n is a whole number.
What is the smallest possible value for n to make this inequality true?
The wording is appalling, classic NAPLAN. They could have simply asked:
What is the smallest whole number n for which the inequality is true?
But of course the convoluted wording is the least of our concerns. The fundamental problem is that the use of the expression “whole number” is disastrous.
Mathematicians would avoid the expression “whole number”, but if pressed would most likely consider it a synonym for “integer”, as is done in the Australian Curriculum (scroll down) and some dictionaries. With this interpretation, where the negative integers are included, the above NAPLAN question obviously has no solution. Sometimes, including in, um, the Australian Curriculum (scroll down), “whole number” is used to refer to only the nonnegative integers or, rarely, to only the positive integers. With either of these interpretations the NAPLAN question is pretty nice, with a solution n = 10. But it remains the case that, at best, the expression “whole number” is irretrievably ambiguous and the NAPLAN question is fatally flawed.
Pointing out an error in a NAPLAN test is like pointing out one of Donald Trump’s lies: you feel you must, but doing so inevitably distracts from the overall climate of nonsense and nastiness. Still, one can hope that ACARA will be called on this, will publicly admit that they stuffed up, and will consider employing a competent mathematician to vet future questions. Unfortunately, ACARA is just about as inviting of criticism and as open to admitting error as Donald Trump.
Our first post concerns an error in the 2016 Mathematical Methods Exam 2 (year 12 in Victoria, Australia). It is not close to the silliest mathematics we’ve come across, and not even the silliest error to occur in a Methods exam. Indeed, most Methods exams are riddled with nonsense. For several reasons, however, whacking this particular error is a good way to begin: the error occurs in a recent and important exam; the error is pretty dumb; it took a special effort to make the error; and the subsequent handling of the error demonstrates the fundamental (lack of) character of the Victorian Curriculum and Assessment Authority.
The problem, first pointed out to us by teacher and friend John Kermond, is in Section B of the exam and concerns Question 3(h)(ii). This question relates to a probability distribution with a given “probability density function” f.
Now, anyone with a good nose for calculus is going to be thinking “uh-oh”. It is a fundamental property of a PDF that the total integral (underlying area) should equal 1. But how are all those integrated powers of e going to cancel out? Well, they don’t. What has been defined is only approximately a PDF, with a total area close to, but not equal to, 1. (It is easy to calculate the area exactly using integration by parts.)
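The failure mode is easy to reproduce. Here is a minimal sketch using a hypothetical stand-in density of broadly similar shape (the exam’s actual function is not reproduced here): g(x) = (x/25)e^(−x/5), truncated to [0, 15]. Integration by parts gives the exact antiderivative −(x/5 + 1)e^(−x/5), and a numerical quadrature confirms that the total area falls noticeably short of 1.

```python
import math

def g(x):
    # Hypothetical stand-in density (NOT the exam's function):
    # (x/25) * e^(-x/5), truncated to [0, 15].
    return (x / 25) * math.exp(-x / 5)

def antiderivative(x):
    # By integration by parts: d/dx [ -(x/5 + 1) e^(-x/5) ] = (x/25) e^(-x/5).
    return -(x / 5 + 1) * math.exp(-x / 5)

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

area_exact = antiderivative(15) - antiderivative(0)  # 1 - 4e^(-3)
area_numeric = simpson(g, 0, 15)

print(area_exact, area_numeric)  # both agree, and both fall short of 1
```

The point of the toy example: whether by hand (integration by parts) or by machine (quadrature), a one-line check reveals a density whose total area is not 1.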
Below we’ll discuss the absurdity of handing students a non-PDF, but back to the exam question. 3(h)(ii) asks the students to find the median of the “probability distribution”, correct to two decimal places. Since the question makes no sense for a non-PDF, the VCAA has of course shot itself in the foot. However, we can still attempt to make some sense of the question, which is when we discover that the VCAA has also shot itself in the other foot.
The median m of a probability distribution is the half-way point. So, in the integration context here we want the m for which

a) ∫_{-∞}^{m} f(x) dx = 1/2.
As such, this question was intended to be just another CAS exercise, and so both trivial and pointless: push the button, write down the answer and on to the next question. The problem is, the median can also be determined by the equation

b) ∫_{m}^{∞} f(x) dx = 1/2,
or by the equation

c) ∫_{-∞}^{m} f(x) dx = ∫_{m}^{∞} f(x) dx.
And, since our function is only approximately a PDF, these three equations necessarily give three different answers: to the demanded two decimal places the answers are respectively 176.45, 176.43 and 176.44. Doh!
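The three-way split can be seen in miniature with a toy mis-normalised density (again, not the exam’s function): take g(x) = c·e^(−x) on [0, ∞) with c = 1.02, so the total area is 1.02 rather than 1. Each of the three defining equations then pins down a different “median”, solvable in closed form.

```python
import math

# Toy mis-normalised "density" (an illustrative assumption, not the exam's f):
# g(x) = c * e^(-x) on [0, infinity), with total area c = 1.02 rather than 1.
c = 1.02

# a) area to the left of m equals 1/2:  c * (1 - e^(-m)) = 1/2
m_a = -math.log(1 - 1 / (2 * c))

# b) area to the right of m equals 1/2:  c * e^(-m) = 1/2
m_b = math.log(2 * c)

# c) left and right areas are equal:  c * (1 - e^(-m)) = c * e^(-m)
m_c = math.log(2)

# Three genuinely different "medians", with the answer to c) sandwiched
# between a) and b), just as 176.44 sits between 176.43 and 176.45.
print(round(m_a, 4), round(m_b, 4), round(m_c, 4))
```

Once the total area differs from 1, the three characterisations of the median must disagree, which is exactly the trap the exam set for itself.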
What to make of this? There are two obvious questions.
1. How did the VCAA end up with a PDF which isn’t a PDF?
It would be astonishing if all of the exam’s writers and checkers failed to notice that the integral was not 1. It would be even more astonishing if all the writers and checkers recognised, and were comfortable with, a non-PDF. Especially since the VCAA can be notoriously, absurdly fussy about the form and precision of answers (see below).
2. How was the error in 3(h)(ii) not detected?
It should have been routine for this mistake to have been detected and corrected with any decent vetting. Yes, we all make mistakes. Mistakes in very important exams, however, should not be so common, and the VCAA seems to make a habit of it.
OK, so the VCAA stuffed up. It happens. What happened next? That’s where the VCAA’s arrogance and cowardice shine bright for all to see. The one and only sentence in the Examiners’ Report that remotely addresses the error is:
“As [the] function f is a close approximation of the [???] probability density function, answers to the nearest integer were accepted”.
The wording is clumsy, and no concession has been made that the best (and uniquely correct) answer is “The question is stuffed up”, but it seems that solutions to all of a), b) and c) above were accepted. The problem, however, isn’t with the grading of the question.
It is perhaps too much to expect an insufferably arrogant VCAA to apologise, to express anything approximating regret for yet another error. But how could the VCAA fail to understand the necessity of a clear and explicit acknowledgement of the error? Apart from demonstrating total gutlessness, it is fundamentally unprofessional. How are students and teachers, especially new teachers, supposed to read the exam question and report? How are students and teachers supposed to approach such questions in the future? Are they still expected to employ the precise definitions that they have learned? Or, are they supposed to now presume that near enough is good enough?
For a pompous finale, the Examiners’ Report follows up by snarking that, in writing the integral for the PDF, “The dx was often missing from students’ working”. One would have thought that the examiners might have dispensed with their finely honed prissiness for that one paragraph. But no. For some clowns it’s never the wrong time to whine about a missing dx.
UPDATE (16 June): In the comments below, Terry Mills has made the excellent point that the prior question on the exam is similarly problematic. 3(h)(i) asks students to calculate the mean of the probability distribution, which would normally be computed as the integral ∫ x f(x) dx. For our non-PDF, however, we should normalise by dividing by the total area. To the demanded two decimal places, that changes the answer from the Examiners’ Report’s 170.01 to 170.06.
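The same normalisation point can be checked with a toy example (once more, a stand-in density, not the exam’s actual f): for g(x) = (x/25)e^(−x/5) truncated to [0, 15], the raw integral ∫ x g(x) dx and the properly normalised mean ∫ x g(x) dx / ∫ g(x) dx differ substantially.

```python
import math

def g(x):
    # Hypothetical truncated density (not the exam's f): (x/25) e^(-x/5) on [0, 15].
    return (x / 25) * math.exp(-x / 5)

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

area = simpson(g, 0, 15)                          # total area, well short of 1
naive_mean = simpson(lambda x: x * g(x), 0, 15)   # the "textbook" formula
true_mean = naive_mean / area                     # normalised by the total area

print(round(naive_mean, 2), round(true_mean, 2))
```

For this toy density the truncation is severe, so the gap between the naive and normalised means is large; for the exam’s almost-a-PDF the same effect is small but still enough to move the second decimal place, as above.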