Following our discussion with Charlie, we sent a short but strong letter to WA’s School Curriculum Standards Authority, criticising one specific question and outlining our (and some others’) general concerns. Their polite fobbing off indicated that our comments regarding the particular question “will be looked into”. On the exam generally, they responded: “Feedback from teachers and candidates indicates the examination was well received and that the examination was fair, valid and based on the syllabus.” The reader can make of that what they will.
Determine the errors, ambiguities and sillinesses in the 2017 WA Applications Exam, Part 1 and Part 2. (Here, also, is the Summary Exam Report. Unfortunately, and ridiculously, the full report and the grading scheme are not made public, and so cannot be part of the competition.)
Post any identified issues in the comments below (anonymously, if you wish). You may post more than once, particularly on different questions, but please don’t edit on the run; post updates and corrections as comments to your own posts. You may (politely) comment on and seek to clarify others’ comments.
This post will be updated below, as the issues (or lack thereof) with particular questions are sorted out.
Entry is of course free (though you could always donate to Tenderfeet).
First prize, a signed copy of A Dingo Ate My Math Book, goes to the person who makes the most original and most valuable contributions.
Consolation prizes of Burkard’s QED will be awarded as deemed appropriate.
Rushed and self-appended contributions will be marked down!
This is obviously subjective as all Hell, and Marty’s decision will be final.
Charlie, Paul, Burkard, Anthony, Joseph, David and other fellow travellers are ineligible to enter.
Employees of SCSA are eligible to enter, since there’s no indication they have any chance of winning.
All correspondence will be entered into.
Well that worked well. Congratulations to Number 8, who wins by default. Details are here. We’ll attempt another competition, of hopefully broader interest, in the near future.
Our second post on the 2017 VCE exam madness concerns a question on the first Specialist Mathematics exam. Typically Specialist exams, particularly the first exams, don’t go too far off the rails; it’s usually more “meh” than madness. (Not that “meh” is an overwhelming endorsement of what is nominally a special mathematics subject.) This year, however, the Specialist exams have some notably Methodsy bits. The following nonsense was pointed out to us by John, a friend and colleague.
The final question, Question 10, on the first Specialist exam concerns the function f(x) = √(arccos(x/2)), on its maximal domain [-2,2]. In part (c), students are asked to determine the volume of the solid of revolution formed when the region under the graph of f is rotated around the x-axis. This leads to the integral

V = π ∫_{-2}^{2} arccos(x/2) dx
Students don’t have their stupefying CAS machines in this first exam, so how to do the integral? It is natural to consider integration by parts, but unfortunately this standard and powerful technique is no longer part of the VCE curriculum. (Why not? You’ll have to ask the clowns at ACARA and the VCAA.)
No matter. The VCAA examiners love to have students go through a faux-parts computation. So, in part (a) of the question, students are asked to check the derivative of x arccos(x/a). Setting a = 2 in the resulting equation, this gives

d/dx [x arccos(x/2)] = arccos(x/2) − x/√(4 − x²)
We can now integrate and rearrange, giving

V = π [x arccos(x/2)]_{-2}^{2} + π ∫_{-2}^{2} x/√(4 − x²) dx = 2π² + π ∫_{-2}^{2} x/√(4 − x²) dx
So, all that remains is to do that last integral, and … uh oh.
It is easy to integrate x/√(4 − x²) indefinitely by substitution, but the problem is that our definite(ish) integral is improper at both endpoints. And, unfortunately, improper integrals are not part of the VCE curriculum. (Why not? You’ll have to ask the clowns at ACARA and the VCAA.) Moreover, even if improper integrals were available, the double improperness is fiddly: we are not permitted to simply integrate from some –b to b and then let b tend to 2.
So, what is a Specialist student to do? One can hope to argue that the integral is zero by odd symmetry, but the improperness is again an issue. As an example indicating the difficulty, the similarly odd and doubly improper integral ∫_{-2}^{2} x/(4 − x²) dx is not equal to 0; it does not exist at all. (The TI-Nspire falsely computes the integral to be 0, which is less than inspiring.) Any argument which arrives at the answer 0 for integrating x/(4 − x²) is invalid, and is thus prima facie invalid for integrating x/√(4 − x²) as well.
Now, in fact ∫_{-2}^{2} x/√(4 − x²) dx is equal to zero, and so V = 2π². In particular, it is possible to argue that the fatal problem with x/(4 − x²) does not occur for our integral, and so both the substitution and symmetry approaches can be made to work. The argument, however, is subtle, well beyond what is expected in a Specialist course.
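For the sceptical, the two claims above are easy to check with SymPy (the code, and the comparison integrand x/(4 − x²), are ours, not the exam’s):

```python
import sympy as sp

x = sp.symbols('x')

# The leftover integral from the faux-parts computation. It is improper at
# both endpoints, but its antiderivative -sqrt(4 - x^2) stays finite there,
# so the improper integral converges, to 0.
print(sp.integrate(x / sp.sqrt(4 - x**2), (x, -2, 2)))  # 0

# The lookalike integrand x/(4 - x^2) is also odd, but its antiderivative
# -(1/2)ln(4 - x^2) blows up at the endpoints, so each one-sided piece
# diverges and the naive symmetry argument is invalid.
print(sp.limit(-sp.log(4 - x**2) / 2, x, 2, '-'))  # oo
```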
Note also that this improperness could have been avoided, with no harm to the question, simply by taking the original domain to be, for example, [-1,1]. Which was exactly the approach taken on Question 5 of the 2017 Northern Hemisphere Specialist Exam 1. God knows why it wasn’t done here, but it wasn’t, and consequently the examiners have trouble ahead.
The blunt fact is, Specialist students cannot validly compute ∫_{-2}^{2} x/√(4 − x²) dx with any technique they would have seen in a standard Specialist class. They must either argue incompletely by symmetry or ride roughshod over the improperness. The Examiners’ Report will be a while coming out, though presumably the examiners will accept either argument. But here is a safe prediction: the Report will either contain mealy-mouthed nonsense or blatant mathematical falsehoods. The only alternative is for the examiners to make a clear admission that they stuffed up. Which won’t happen.
Finally, the irony. Look again at the original integral for V:

V = π ∫_{-2}^{2} arccos(x/2) dx

Though this integral arose in the calculation of a volume, it can still be interpreted (up to the factor π) as the area under the graph of the function y = arccos(x/2).
But now we can consider the corresponding area under the inverse function y = 2cos(x). For each height y in [0, π], the region under y = arccos(x/2) has a horizontal slice running from x = −2 across to x = 2cos(y). It follows that

V = π ∫_{0}^{π} (2cos(x) + 2) dx = 2π²
This inverse function trick is standard for Specialist (and Methods) students, and so the students can readily calculate the volume V in this manner. True, reinterpreting the integral for V as an area is a sharp conceptual shift, but with appropriate wording it could have made for a very good Specialist question.
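Both the direct integral and the inverse-function computation are easy to check (a SymPy sketch; the slice width 2cos(y) + 2 is our rendering of the picture’s argument):

```python
import sympy as sp

x = sp.symbols('x')

# Direct CAS-style evaluation of the volume integral:
V_direct = sp.pi * sp.integrate(sp.acos(x / 2), (x, -2, 2))

# Inverse-function version: horizontal slices of the region under
# y = arccos(x/2) run from x = -2 to x = 2cos(y), for y in [0, pi].
V_inverse = sp.pi * sp.integrate(2 * sp.cos(x) + 2, (x, 0, sp.pi))

print(V_direct, V_inverse)  # both equal 2*pi**2
```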
In summary, the Specialist Examiners guided the students to calculate V with a jerry-built technique, leading to an integral that the students cannot validly compute, all the while avoiding a simpler approach well within the students’ grasp. Well played, Examiners, well played.
Yes, we’ve used that title before, but it’s a damn good title. And there is so much madness in Mathematical Methods to cover. And not only Methods. Victoria’s VCE exams are coming to an end, the maths exams are done, and there is all manner of new and astonishing nonsense to consider. This year, the Victorian Curriculum and Assessment Authority have outdone themselves.
Over the next week we’ll put up a series of posts on significant errors in the 2017 Methods, Specialist Maths and Further Maths exams, including in the mid-year Northern Hemisphere exams. By “significant error” we mean more than just a pointless exercise in button-pushing, or tone-deaf wording, or idiotic pseudomodelling, or aimless pedantry, all of which is endemic in VCE maths exams. A “significant error” in an exam question refers to a fundamental mathematical flaw with the phrasing, or with the intended answer, or with the (presumed or stated) method that students were supposed to use. Not all the errors that we shall discuss are large, but they are all definite errors, they are errors that would have (or at least should have) misled some students, and none of these errors should have occurred. (It is courtesy of diligent (and very annoyed) maths teachers that we learned of most of these questions.) Once we’ve documented the errors, we’ll post on the reasons that the errors are so prevalent, on the pedagogical and administrative climate that permits and encourages them.
Our first post concerns Exam 1 of Mathematical Methods. In the final question, Question 9, students consider the function f(x) = √x (1 − x) on the closed interval [0,1], pictured below. In part (b), students are required to show that, on the open interval (0,1), “the gradient of the tangent to the graph of f” is (1 − 3x)/(2√x). A clumsy combination of calculation and interpretation, but ok. The problem comes when students then have to consider tangents to the graph.
In part (c), students take the angle θ in the picture to be 45 degrees. The pictured tangents then have slopes 1 and -1, and the students are required to find the equations of these two tangents. And therein lies the problem: it turns out that the “derivative” of f is equal to -1 at the endpoint x = 1. However, though the natural domain of the function is [0,∞), the students are explicitly told that the domain of f is [0,1].
This is obvious and unmitigated madness.
Before we hammer the madness, however, let’s clarify the underlying mathematics.
Does the derivative/tangent of a suitably nice function exist at an endpoint? It depends upon who you ask. If the “derivative” is to exist then the standard “first principles” definition must be modified to be a one-sided limit. So, for our function f above, we would define

f′(1) = lim_{h→0⁻} (f(1 + h) − f(1))/h
This is clearly not too difficult to do, and with this definition we find that f′(1) = -1, as implied by the Exam question. (Note that since f naturally extends to the right of x = 1, the actual limit computation can be circumvented.) However, and this is the fundamental point, not everyone does this.
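For concreteness, the one-sided limit is simple to carry out in SymPy (a sketch; the formula for f is our reading of the exam question):

```python
import sympy as sp

x, h = sp.symbols('x h')
f = sp.sqrt(x) * (1 - x)   # the exam's f, as we read it

# One-sided "first principles" derivative at the right endpoint x = 1:
endpoint_slope = sp.limit((f.subs(x, 1 + h) - f.subs(x, 1)) / h, h, 0, '-')
print(endpoint_slope)  # -1
```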
At the university level it is common, though far from universal, to permit differentiability at the endpoints. (The corresponding definition of continuity on a closed interval is essentially universal, at least after first year.) At the school level, however, the waters are much muddier. The VCE curriculum and the most popular and most respected Methods textbook appear to be completely silent on the issue. (This textbook also totally garbles the related issue of derivatives of piecewise defined (“hybrid”) functions.) We suspect that the vast majority of Methods teachers are similarly silent, and that the minority of teachers who do raise the issue would not in general permit differentiability at an endpoint.
In summary, it is perfectly acceptable to permit derivatives/tangents to graphs at their endpoints, and it is perfectly acceptable to proscribe them. It is also perfectly acceptable, at least at the school level, to avoid the issue entirely, as is done in the VCE curriculum, by most teachers and, in particular, in part (b) of the Exam question above.
What is blatantly unacceptable is for the VCAA examiners to spring a completely gratuitous endpoint derivative on students when the issue has never been raised. And what is pure and unadulterated madness is to spring an endpoint derivative after carefully and explicitly avoiding it on the immediately previous part of the question.
The Victorian Curriculum and Assessment Authority has a long tradition of scoring own goals. The question above, however, is spectacular. Here, the VCAA is like a goalkeeper grasping the ball firmly in both hands, taking careful aim, and flinging the ball into his own net.
In Q9(b), students were asked to show that the derivative of √x (1 − x) is (1 − 3x)/(2√x). As we noted, the question was pointlessly verbose in classic VCAA style, but no big deal; an easy 1-mark question. What could go wrong?
Well, what went wrong is that 2/3 of students scored 0/1 on this very easy question. How? The Examination Report explains:
When answering ‘show that’ questions, students should include all steps to demonstrate exactly what was done, but many students often left steps out. A common pattern was to go straight from the first line of differentiation immediately to the final line, with no indication of obtaining a common denominator.
For fuck’s sake.
The stark incompetence of the VCAA is often stunning. And the nasty, meaningless pedantry of the VCAA is often stunning. But, on a question like this, when you see the two in seamless combination, that’s when you realise that you’re in the presence of true greatness.
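For the record, the examiners’ precious common-denominator step amounts to a one-line check (a SymPy sketch; the formula for f is our reading of the exam question):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.sqrt(x) * (1 - x)   # the exam's f, as we read it

# Differentiate and place over a common denominator, as the Report demands:
gradient = sp.together(sp.expand(sp.diff(f, x)))
print(gradient)            # equivalent to (1 - 3x)/(2*sqrt(x))
```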
Each year about a million Australian school students are required to sit the Government’s NAPLAN tests. Produced by ACARA, the same outfit responsible for the stunning Australian Curriculum, these tests are expensive, annoying and pointless. In particular, it is ridiculous for students to sit a numeracy test, rather than a test on arithmetic or more broadly on mathematics. It guarantees that the general focus will be wrong and that specific weirdnesses will abound. The 2017 NAPLAN tests, conducted last week, have not disappointed. Today, however, we have other concerns.
Wading into NAPLAN’s numeracy quagmire, one can often find a nugget or two of glowing wrongness. Here is a question from the 2017 Year 9 test:
In this inequality n is a whole number.
What is the smallest possible value for n to make this inequality true?
The wording is appalling, classic NAPLAN. They could have simply asked:
What is the smallest whole number n for which the inequality holds?
But of course the convoluted wording is the least of our concerns. The fundamental problem is that the use of the expression “whole number” is disastrous.
Mathematicians would avoid the expression “whole number”, but if pressed would most likely consider it a synonym for “integer”, as is done in the Australian Curriculum (scroll down) and some dictionaries. With this interpretation, where the negative integers are included, the above NAPLAN question obviously has no solution. Sometimes, including in, um, the Australian Curriculum (scroll down), “whole number” is used to refer to only the nonnegative integers or, rarely, to only the positive integers. With either of these interpretations the NAPLAN question is pretty nice, with a solution n = 10. But it remains the case that, at best, the expression “whole number” is irretrievably ambiguous and the NAPLAN question is fatally flawed.
Pointing out an error in a NAPLAN test is like pointing out one of Donald Trump’s lies: you feel you must, but doing so inevitably distracts from the overall climate of nonsense and nastiness. Still, one can hope that ACARA will be called on this, will publicly admit that they stuffed up, and will consider employing a competent mathematician to vet future questions. Unfortunately, ACARA is just about as inviting of criticism and as open to admitting error as Donald Trump.
Our first post concerns an error in the 2016 Mathematical Methods Exam 2 (year 12 in Victoria, Australia). It is not close to the silliest mathematics we’ve come across, and not even the silliest error to occur in a Methods exam. Indeed, most Methods exams are riddled with nonsense. For several reasons, however, whacking this particular error is a good way to begin: the error occurs in a recent and important exam; the error is pretty dumb; it took a special effort to make the error; and the subsequent handling of the error demonstrates the fundamental (lack of) character of the Victorian Curriculum and Assessment Authority.
The problem, first pointed out to us by teacher and friend John Kermond, is in Section B of the exam and concerns Question 3(h)(ii). This question relates to a probability distribution with “probability density function”
Now, anyone with a good nose for calculus is going to be thinking “uh-oh”. It is a fundamental property of a PDF that the total integral (underlying area) should equal 1. But how are all those integrated powers of e going to cancel out? Well, they don’t. What has been defined is only approximately a PDF, with a total area slightly less than 1. (It is easy to calculate the area exactly using integration by parts.)
Below we’ll discuss the absurdity of handing students a non-PDF, but back to the exam question. 3(h)(ii) asks the students to find the median of the “probability distribution”, correct to two decimal places. Since the question makes no sense for a non-PDF, of course the VCAA has shot itself in the foot. However, we can still attempt to make some sense of the question, which is when we discover that the VCAA has also shot itself in the other foot.
The median m of a probability distribution is the half-way point. So, in the integration context here we want the m for which

a)   ∫_{-∞}^{m} f(x) dx = 1/2
As such, this question was intended to be just another CAS exercise, and so both trivial and pointless: push the button, write down the answer and on to the next question. The problem is, the median can also be determined by the equation

b)   ∫_{m}^{∞} f(x) dx = 1/2
or by the equation

c)   ∫_{-∞}^{m} f(x) dx = ∫_{m}^{∞} f(x) dx
And, since our function is only approximately a PDF, these three equations necessarily give three different answers: to the demanded two decimal places the answers are respectively 176.45, 176.43 and 176.44. Doh!
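The three-answers phenomenon is easy to reproduce with a toy example (entirely ours, not the exam’s): take the almost-PDF e^(−x) on [0, 7], whose total area 1 − e^(−7) falls just short of 1. The three median equations then give three different answers:

```python
import sympy as sp

x, m = sp.symbols('x m', positive=True)
g = sp.exp(-x)                        # toy almost-PDF on [0, 7]

left = sp.integrate(g, (x, 0, m))     # area to the left of m
right = sp.integrate(g, (x, m, 7))    # area to the right of m

m_a = sp.nsolve(left - sp.Rational(1, 2), m, 0.7)   # left area = 1/2
m_b = sp.nsolve(right - sp.Rational(1, 2), m, 0.7)  # right area = 1/2
m_c = sp.nsolve(left - right, m, 0.7)               # equal halves
print(m_a, m_b, m_c)  # three distinct values near ln(2)
```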
What to make of this? There are two obvious questions.
1. How did the VCAA end up with a PDF which isn’t a PDF?
It would be astonishing if all of the exam’s writers and checkers failed to notice the integral was not 1. It is even more astonishing if all the writers-checkers recognised and were comfortable with a non-PDF. Especially since the VCAA can be notoriously, absurdly fussy about the form and precision of answers (see below).
2. How was the error in 3(h)(ii) not detected?
It should have been routine for this mistake to have been detected and corrected with any decent vetting. Yes, we all make mistakes. Mistakes in very important exams, however, should not be so common, and the VCAA seems to make a habit of it.
OK, so the VCAA stuffed up. It happens. What happened next? That’s where the VCAA’s arrogance and cowardice shine bright for all to see. The one and only sentence in the Examiners’ Report that remotely addresses the error is:
“As [the] function f is a close approximation of the [???] probability density function, answers to the nearest integer were accepted”.
The wording is clumsy, and no concession has been made that the best (and uniquely correct) answer is “The question is stuffed up”, but it seems that solutions to all of a), b) and c) above were accepted. The problem, however, isn’t with the grading of the question.
It is perhaps too much to expect an insufferably arrogant VCAA to apologise, to express anything approximating regret for yet another error. But how could the VCAA fail to understand the necessity of a clear and explicit acknowledgement of the error? Apart from demonstrating total gutlessness, it is fundamentally unprofessional. How are students and teachers, especially new teachers, supposed to read the exam question and report? How are students and teachers supposed to approach such questions in the future? Are they still expected to employ the precise definitions that they have learned? Or, are they supposed to now presume that near enough is good enough?
For a pompous finale, the Examiners’ Report follows up by snarking that, in writing the integral for the PDF, “The dx was often missing from students’ working”. One would have thought that the examiners might have dispensed with their finely honed prissiness for that one paragraph. But no. For some clowns it’s never the wrong time to whine about a missing dx.
UPDATE (16 June): In the comments below, Terry Mills has made the excellent point that the prior question on the exam is similarly problematic. 3(h)(i) asks students to calculate the mean of the probability distribution, which would normally be calculated as ∫ x f(x) dx. For our non-PDF, however, we should normalise by dividing by ∫ f(x) dx. To the demanded two decimal places, that changes the answer from the Examiners’ Report’s 170.01 to 170.06.
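The normalisation point can likewise be illustrated with a toy almost-PDF (ours, not the exam’s): e^(−x) on [0, 7], with total area 1 − e^(−7):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
g = sp.exp(-x)                              # toy almost-PDF on [0, 7]

area = sp.integrate(g, (x, 0, 7))           # 1 - exp(-7), a little under 1
raw_mean = sp.integrate(x * g, (x, 0, 7))   # naive mean formula
true_mean = raw_mean / area                 # mean normalised by the area

print(sp.N(raw_mean, 6), sp.N(true_mean, 6))  # the two values differ
```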
UPDATE (05/07/22): The examination report was updated on 18/07/20, and now (mostly) fesses up to the nonsense in 3(h)(ii). There is still no admission of the parallel nonsense in 3(h)(i).