Yes, we’ve used that title before, but it’s a damn good title. And there is *so* much madness in Mathematical Methods to cover. And not only Methods. Victoria’s VCE exams are coming to an end, the maths exams are done, and there is all manner of new and astonishing nonsense to consider. This year, the Victorian Curriculum and Assessment Authority have outdone themselves.

Over the next week we’ll put up a series of posts on significant errors in the 2017 Methods, Specialist Maths and Further Maths exams, including in the mid-year Northern Hemisphere exams. By “significant error” we mean more than just a pointless exercise in button-pushing, or tone-deaf wording, or idiotic pseudomodelling, or aimless pedantry, all of which is endemic in VCE maths exams. A “significant error” in an exam question refers to a fundamental mathematical flaw with the phrasing, or with the intended answer, or with the (presumed or stated) method that students were supposed to use. Not all the errors that we shall discuss are large, but they are all definite errors, they are errors that would have (or at least should have) misled some students, and none of these errors should have occurred. (It is courtesy of diligent (and very annoyed) maths teachers that I learned of most of these questions.) Once we’ve documented the errors, we’ll post on the reasons that the errors are so prevalent, on the pedagogical and administrative climate that permits and encourages them.

Our first post concerns Exam 1 of Mathematical Methods. In the final question, Question 9, students consider the function on the **closed interval** [0,1], pictured below. In part (b), students are required to show that, on the **open interval** (0,1), “the gradient of the tangent to the graph of f” is . A clumsy combination of calculation and interpretation, but ok. The problem comes when students then have to consider tangents to the graph.

In part (c), students take the angle θ in the picture to be 45 degrees. The pictured tangents then have slopes 1 and -1, and the students are required to find the equations of these two tangents. And therein lies the problem: it turns out that the “derivative” of *f* is equal to -1 at the endpoint *x* = 1. However, though the natural domain of the function is [0,∞), the students are explicitly told that the domain of *f* is [0,1].

This is obvious and unmitigated madness.

Before we hammer the madness, however, let’s clarify the underlying mathematics.

Does the derivative/tangent of a suitably nice function exist at an endpoint? It depends upon who you ask. If the “derivative” is to exist then the standard “first principles” definition must be modified to be a one-sided limit. So, for our function *f* above, we would define

f'(1) = lim_{h→0⁻} [f(1 + h) − f(1)]/h .

This is clearly not too difficult to do, and with this definition we find that f'(1) = -1, as implied by the Exam question. (Note that since *f* naturally extends to the right of *x* = 1, the actual limit computation can be circumvented.) **However, and this is the fundamental point, not everyone does this.**
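To see the one-sided definition in action, here is a minimal numerical sketch. The exam's actual function doesn't appear above, so the function g below is a stand-in chosen purely for illustration, not the exam's f; the point is only that a left-hand difference quotient gives a perfectly sensible "derivative" at the right endpoint of a closed interval.

```python
# One-sided (left-hand) derivative at the endpoint x = 1 of a function
# defined on [0, 1], estimated by a left-hand difference quotient.
import math

def g(x):
    # Hypothetical stand-in function on [0, 1]; NOT the exam's function.
    return math.sqrt(x) - x

def left_derivative(f, a, h=1e-6):
    """Difference quotient approaching a from the left:
    (f(a) - f(a - h)) / h, an approximation to the one-sided limit."""
    return (f(a) - f(a - h)) / h

# For this g, the one-sided derivative at 1 is 1/(2*sqrt(1)) - 1 = -1/2.
print(left_derivative(g, 1.0))  # close to -0.5
```

Whether one then *calls* this value "the derivative of g at 1" is exactly the convention at issue: the computation is unproblematic, the terminology is not.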

At the university level it is common, though far from universal, to permit differentiability at the endpoints. (The corresponding definition of continuity on a closed interval *is* essentially universal, at least after first year.) At the school level, however, the waters are much muddier. The VCE curriculum and the most popular and most respected Methods textbook appear to be completely silent on the issue. (This textbook also totally garbles the related issue of derivatives of piecewise defined (“hybrid”) functions.) We suspect that the vast majority of Methods teachers are similarly silent, and that the minority of teachers who *do* raise the issue would *not* in general permit differentiability at an endpoint.

In summary, it is perfectly acceptable to permit derivatives/tangents to graphs at their endpoints, and it is perfectly acceptable to proscribe them. It is also perfectly acceptable, at least at the school level, to avoid the issue entirely, as is done in the VCE curriculum, by most teachers and, in particular, in part (b) of the Exam question above.

What is blatantly unacceptable is for the VCAA examiners to spring a completely gratuitous endpoint derivative on students when the issue has never been raised. And what is pure and unadulterated madness is to spring an endpoint derivative after carefully and explicitly avoiding it on the immediately previous part of the question.

The Victorian Curriculum and Assessment Authority has a long tradition of scoring own goals. The question above, however, is spectacular. Here, the VCAA is like a goalkeeper grasping the ball firmly in both hands, taking careful aim, and flinging the ball into his own net.

**UPDATE (20/09/20)**

Above, we hammered Q9(c) on the 2017 Mathematical Methods, Exam 1. We regret not also hammering the idiotically misleading diagram, but another issue has arisen, pointed out to us by frequent commenter SRK.

In Q9(b), students were asked to show that the derivative of is . As we noted, the question was pointlessly verbose in classic VCAA style, but no big deal; an easy 1-mark question. What could go wrong?

Well, what went wrong is that 2/3 of students scored 0/1 on this very easy question. How? The Examination Report explains:

*When answering ‘show that’ questions, students should include all steps to demonstrate exactly what was done, but many students often left steps out. A common pattern was to go straight from the first line of differentiation immediately to the final line, with no indication of obtaining a common denominator.*

For fuck’s sake.

The stark incompetence of the VCAA is often stunning. And the nasty, meaningless pedantry of the VCAA is often stunning. But on a question like this, when you see the two in seamless combination, that’s when you realise that you’re in the presence of true greatness.

Thanks Marty,

I am a teacher (although I spent most of my teaching life in the IB system which is very different) and found this issue of errors to be quite puzzling. At a university, a lecturer quite possibly makes mistakes on their exam papers. This is forgivable because the time and resources required to proof-read are probably not there.

In VCE, where one would hope there are a lot more qualified proof-readers and a much larger budget for examinations, mistakes should not happen as frequently as they seem to.

Is the issue to do with the exam setters, the checkers or the system that brings these two together? My inclination is option C, but I have no evidence to support this conjecture.

Thanks, Number 8. I think the answer is D. All of the above. There is no excusing the number of flagrant errors, and there is plenty more to criticise on these exams than the flagrant errors. But I think also the VCE curriculum almost guarantees that errors and inanities will be common. I hope to write about this once I’ve written on all the 2017 errors.

Writing about the inanities in the VCE curriculum could easily surpass 100,000 words.

But you already have a PhD, so perhaps use dot points.

(Dark humour implied)