Untried Methods

We’re sure we’ll live to regret this post, but yesterday’s VCE Methods Exam 1 looked like a good exam.

No, that’s not a set up for a joke. It actually looked like a nice exam. (It’s not online yet). Sure, there were some meh questions, the inevitable consequence of an incompetent study design. And yes, there was a minor Magritte aspect to the final question. And yes, it’s much easier to get an exam right if it’s uncorrupted by the idiocy of CAS, with the acid test being Exam 2. And yes, we could be plain wrong; we only gave the exam a cursory read, and if there’s a dodo it’s usually in the detail.

But for all that the exam genuinely looked good. The questions in general seemed mathematically natural. A couple of the questions also appeared to be difficult in a good, mathematical way, rather than in the familiar “What the Hell do they want?” manner.

What happened?

 

Inverted Logic

The 2018 Northern Hemisphere Mathematical Methods exams (1 and 2) are out. We didn’t spot any Magritte-esque lunacy, which was a pleasant surprise. In general, the exam questions were merely trivial, clumsy, contrived, calculator-infested and loathsomely ugly. So, all in all not bad by VCAA standards.

There was, however, one notable question. The final multiple choice question on Exam 2 reads as follows:

Let f be a one-to-one differentiable function such that f(3) = 7, f(7) = 8, f′(3) = 2 and f′(7) = 3. The function g is differentiable and g(x) = f⁻¹(x) for all x. g′(7) is equal to …

The wording is hilarious, at least it is if you’re not a frazzled Methods student in the midst of an exam, trying to make sense of such nonsense. Indeed, as we’ll see below, the question turned out to be too convoluted even for the examiners.

Of course f⁻¹ is a perfectly fine and familiar name for the inverse of f. It takes a special cluelessness to imagine that renaming f⁻¹ as g is somehow required or remotely helpful. The obfuscating wording, however, is the least of our concerns.

The exam question is intended to be a straightforward application of the inverse function theorem. In Leibniz form the theorem reads dx/dy = 1/(dy/dx), though the exam question effectively requires the more explicit but less intuitive function form,

    \[\boldsymbol {\left(f^{-1}\right)'(b) = \frac1{f'\left(f^{-1}(b)\right)}\,.}\]
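
Applied to the exam’s numbers (our arithmetic, not the Report’s), this formula gives the intended answer directly: since f(3) = 7 we have f⁻¹(7) = 3, and so

    \[g'(7) = \left(f^{-1}\right)'(7) = \frac1{f'\left(f^{-1}(7)\right)} = \frac1{f'(3)} = \frac12\,.\]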

The inverse function theorem is typically stated with suitable hypotheses, under which the differentiability of f⁻¹ is a conclusion rather than an assumption. In this regard, the exam question’s hypothesising that the function g is differentiable is somewhat artificial. However, it is not so simple in the school context to discuss natural hypotheses for the theorem. So, underlying the ridiculous phrasing is a reasonable enough question.

What, then, is the problem? The problem is that the inverse function theorem is not explicitly in the VCE curriculum. Really? Really.

Even ignoring the obvious issue this raises for the above exam question, the subliminal treatment of the inverse function theorem in VCE is absurd. One requires plenty of inverse derivatives, even in a first calculus course. Yet there is never any explicit mention of the theorem in either Specialist or Methods, not even a hint that there is a common question with a universal answer.

All that appears to be explicit in VCE, and more in Specialist than Methods, is application of the chain rule, case by isolated case. So, one assumes the differentiability of f⁻¹ and then differentiates f⁻¹(f(x)) in Leibniz form. For example, in the most respected Methods text the derivative of y = log(x) is somewhat dodgily obtained using the chain rule from the (very dodgily obtained) derivative of x = e^y.
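
In outline (our paraphrase of the textbook-style argument, with the differentiability of the log silently assumed), one differentiates both sides of x = e^y with respect to x:

    \[1 = e^y\,\frac{{\rm d}y}{{\rm d}x} \quad\Longrightarrow\quad \frac{{\rm d}y}{{\rm d}x} = \frac1{e^y} = \frac1{x}\,.\]

The conclusion is fine; it is the unexamined assumption that does all the work.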

It is all very implicit, very case-by-case, and very Leibniz. Which makes the above exam question effectively impossible.

How many students actually obtained the correct answer? We don’t know since the Examiners’ Report doesn’t actually report anything. Being a multiple choice question, though, students had a 1 in 5 chance of obtaining the correct answer by dumb luck. Or, sticking to the more plausible answers, maybe even a 1 in 3 or 1 in 2 chance. That seems to be how the examiners stumbled upon the correct answer.

The Report’s solution to the exam question reads as follows (as of September 20, 2018):

f(3) = 7, f'(3) = 8, g(x) = f⁻¹(x), g'(x) = 1/2 since

f'(x) x f'(y) = 1, g(x) = f'(x) = 1/f'(y).

The awfulness displayed above is a wonder to behold. Even if it were correct, the suggested solution would still bear no resemblance to the Methods curriculum, and it would still be unreadable. And the working is not close to correct.

To be fair, the Report warns that its sample answers are “not intended to be exemplary or complete”. So perhaps they just forgot the further warning, that their answers are also not intended to be correct or comprehensible.

It is abundantly clear that the VCAA is incapable of putting together a coherent curriculum, let alone one that is even minimally engaging. Apparently it is even too much to expect the examiners to be familiar with their own crappy curriculum, and to be able to examine it, and to report on it, fairly and accurately.

VCAA Plays Dumb and Dumber

Late last year we posted on Madness in the 2017 VCE mathematics exams, on blatant errors above and beyond the exams’ predictably general clunkiness. For one (Northern Hemisphere) exam, the subsequent VCAA Report had already appeared; this Report was pretty useless in general, and specifically it was silent on the error and the surrounding mathematical crap. None of the other reports had yet appeared.

Now, finally, all the exam reports are out. God only knows why it took half a year, but at least they’re out. We have already posted on one particularly nasty piece of nitpicking nonsense, and now we can review the VCAA‘s own assessment of their five errors:

 

So, the VCAA responds to five blatant errors with five Trumpian silences. How should one describe such conduct? Unprofessional? Arrogant? Cowardly? VCAA-ish? All of the above?

 

Little Steps for Little Minds

Here’s a quick but telling nugget of awfulness from Victoria’s 2017 VCE maths exams. Q9 of the first (non-calculator) Methods Exam is concerned with the function

    \[\boldsymbol {f(x) = \sqrt{x}(1-x)\,.}\]

In Part (b) of the question students are asked to show that “the gradient of the tangent to the graph of f” equals \boldsymbol{ \frac{1-3x}{2\sqrt{x}} }.

A normal human being would simply have asked for the derivative of f, but not much can go wrong, right? Expanding and differentiating, we have

    \[\boldsymbol {f'(x) = \frac{1}{2\sqrt{x}} - \frac32\sqrt{x}=\frac{1-3x}{2\sqrt{x}}\,.}\]

Easy, and done.

So, how is it that 65% of Methods students scored 0 on this contrived but routine 1-point question? Did they choke on “the gradient of the tangent to the graph of f” and go on to hunt for a question written in English?

The Examiners’ Report pinpoints the issue, noting that the exam question “required a step-by-step demonstration …” and that, “[w]hen answering ‘show that’ questions, students should include all steps to demonstrate exactly what was done” (emphasis added). So the Report implies, for example, that our calculation above would have scored 0 because we didn’t explicitly include the step of obtaining a common denominator.

Jesus H. Christ.

Any suggestion that our calculation is an insufficient answer for a student in a senior maths class is pedagogical and mathematical lunacy. This is obvious, even ignoring the fact that Methods questions way too often are flawed and/or require the most fantastic of logical leaps. And, of course, the instruction that “all steps” be included is both meaningless and utterly mad, and the solution in the Examiners’ Report does nothing of the sort. (Exercise: Try to include all steps in the computation and simplification of f’.)

This is just one 1-point question, but such infantilising nonsense is endemic in Methods. The subject is saturated with pointlessly prissy language and infuriating, nano-step nitpicking, none of which bears the remotest resemblance to real mathematical thought or expression.

What is the message of such garbage? For the vast majority of students, who naively presume that an educational authority would have some expertise in education, the message is that mathematics is nothing but soulless bookkeeping, which should be avoided at all costs. For anyone who knows mathematics, however, the message is that Victorian maths education is in the clutches of a heartless and entirely clueless antimathematical institution.

Fixations and Madness

Our sixth and final post on the 2017 VCE exam madness is on some recurring nonsense in Mathematical Methods. The post will be relatively brief, since a proper critique of every instance of the nonsense would be painfully long, and since we’ve said it all before.

The mathematical problem concerns, for a given function f, finding the solutions to the equation

    \[\boldsymbol{(1)\qquad\qquad f(x) \ = \ f^{-1}(x)\,.}\]

This problem appeared, in various contexts, on last month’s Exam 2 in 2017 (Section B, Questions 4(c) and 4(i)), on the Northern Hemisphere Exam 1 in 2017 (Questions 8(b) and 8(c)), on Exam 2 in 2011 (Section 2, Question 3(c)(ii)), and on Exam 2 in 2010 (Section 2, Question 1(a)(iii)).

Unfortunately, the technique presented in the three Examiners’ Reports for solving equation (1) is fundamentally wrong. (The Reports are here, here and here.) In synch with this wrongness, the standard textbook considers four misleading examples, and its treatment of the examples is infused with wrongness (Chapter 1F). It’s a safe bet that the forthcoming Report on the 2017 Methods Exam 2 will be plenty wrong.

What is the promoted technique? It is to ignore the difficult equation above, and to solve instead the presumably simpler equation

    \[ \boldsymbol{(2) \qquad\qquad  f(x) \ = \  x\,,}\]

or perhaps the equation

    \[\boldsymbol{(2)' \qquad\qquad f^{-1}(x)\ = \ x \,.}\]

Which is wrong.

It is simply not valid to assume that either equation (2) or (2)’ is equivalent to (1). Yes, as long as the inverse of f exists then equation (2)’ is equivalent to equation (2): a solution x to (2)’ will also be a solution to (2), and vice versa. And, yes, then any solution to (2) and (2)’ will also be a solution to (1). The converse, however, is in general false: a solution to (1) need not be a solution to (2) or (2)’.

It is easy to come up with functions illustrating this, or think about the graph above, or look here.
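
For one concrete illustration (ours, not one from the exams or the textbook), take f(x) = -x^3, so that f^{-1}(x) = -x^{1/3}. Then equations (1) and (2) read, respectively,

    \[-x^3 = -x^{\frac13}\,, \qquad\qquad -x^3 = x\,.\]

The first has the three solutions x = -1, 0, 1, the second only the solution x = 0: the solutions x = ±1 of (1) are simply invisible to (2) and (2)′.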

OK, the VCAA might argue that the exams (and, except for a couple of up-in-the-attic exercises, the textbook) are always concerned with functions for which solving (2) or (2)’ happens to suffice, so what’s the problem? The problem is that this argument would be idiotic.

Suppose that we taught students that roots of polynomials are always integers, instructed the students to only check for integer solutions, and then carefully arranged for the students to only encounter polynomials with integer solutions. Clearly, that would be mathematical and pedagogical crap. The treatment of equation (1) in Methods exams, and the close to universal treatment in Methods more generally, is identical.

OK, the VCAA might continue to argue that the students have their (stupefying) CAS machines at hand, and that the graphs of the particular functions under consideration make clear that solving (2) or (2)’ suffices. There would then be three responses:

(i) No one tests whether Methods students do anything like a graphical check, or anything whatsoever.

(ii) Hardly any Methods students do do anything. The overwhelming majority of students treat equations (1), (2) and (2)’ as automatically equivalent, and they have been given explicit license by the Examiners’ Reports to do so. Teachers know this and the VCAA knows this, and any claim otherwise is a blatant lie. And, for any reader still in doubt about what Methods students actually do, here’s a thought experiment: imagine the 2018 Methods exam requires students to solve equation (1) for the function f(x) = (x-2)/(x-1), and then imagine the consequences.

(iii) Even if students were implicitly or explicitly arguing from CAS graphics, “Look at the picture” is an absurdly impoverished way to think about or to teach mathematics, or pretty much anything. The power of mathematics is to be able to take the intuition and to either demonstrate what appears to be true, or demonstrate that the intuition is misleading. Wise people are wary of the treachery of images; the VCAA, alas, promotes it.

The real irony and idiocy of this situation is that, with natural conditions on the function f, equation (1) is equivalent to equations (2) and (2)’, and that it is well within reach of Methods students to prove this. If, for example, f is a strictly increasing function then it can readily be proved that the three equations are equivalent. Working through and applying such results would make for excellent lessons and excellent exam questions.
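
For what it’s worth, here is one version of the increasing-function argument (ours), well within a Methods student’s reach. Suppose f is strictly increasing and that f(a) = f^{-1}(a). If f(a) > a then, applying the increasing function f,

    \[a = f\left(f^{-1}(a)\right) = f\left(f(a)\right) > f(a)\,,\]

contradicting f(a) > a. The case f(a) < a fails in the same way, and so f(a) = a: any solution of (1) is a solution of (2), and the three equations are equivalent.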

Instead, what we have is crap. Every year, year after year, thousands of Methods students are being taught and are being tested on mathematical crap.

There’s Madness in the Methods

Yes, we’ve used that title before, but it’s a damn good title. And there is so much madness in Mathematical Methods to cover. And not only Methods. Victoria’s VCE exams are coming to an end, the maths exams are done, and there is all manner of new and astonishing nonsense to consider. This year, the Victorian Curriculum and Assessment Authority have outdone themselves.

Over the next week we’ll put up a series of posts on significant errors in the 2017 Methods, Specialist Maths and Further Maths exams, including in the mid-year Northern Hemisphere exams. By “significant error” we mean more than just a pointless exercise in button-pushing, or tone-deaf wording, or idiotic pseudomodelling, or aimless pedantry, all of which is endemic in VCE maths exams. A “significant error” in an exam question refers to a fundamental mathematical flaw with the phrasing, or with the intended answer, or with the (presumed or stated) method that students were supposed to use. Not all the errors that we shall discuss are large, but they are all definite errors, they are errors that would have (or at least should have) misled some students, and none of these errors should have occurred. (It is courtesy of diligent (and very annoyed) maths teachers that I learned of most of these questions.) Once we’ve documented the errors, we’ll post on the reasons that the errors are so prevalent, on the pedagogical and administrative climate that permits and encourages them.

Our first post concerns Exam 1 of Mathematical Methods. In the final question, Question 9, students consider the function \boldsymbol{ f(x) =\sqrt{x}(1-x)} on the closed interval [0,1], pictured below. In part (b), students are required to show that, on the open interval (0,1), “the gradient of the tangent to the graph of f” is (1-3x)/(2\sqrt{x}). A clumsy combination of calculation and interpretation, but ok. The problem comes when students then have to consider tangents to the graph.

In part (c), students take the angle θ in the picture to be 45 degrees. The pictured tangents then have slopes 1 and -1, and the students are required to find the equations of these two tangents. And therein lies the problem: it turns out that the “derivative” of f is equal to -1 at the endpoint x = 1. However, though the natural domain of the function \sqrt{x}(1-x) is [0,∞), the students are explicitly told that the domain of f is [0,1].

This is obvious and unmitigated madness.

Before we hammer the madness, however, let’s clarify the underlying mathematics.

Does the derivative/tangent of a suitably nice function exist at an endpoint? It depends upon who you ask. If the “derivative” is to exist then the standard “first principles” definition must be modified to be a one-sided limit. So, for our function f above, we would define

    \[f'(1) = \lim_{h\to0^-}\frac{f(1+h) - f(1)}{h}\,.\]
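
For our f, with f(1) = 0 and f(1+h) = \sqrt{1+h}\,\big(1-(1+h)\big) = -h\sqrt{1+h}, this limit works out (our arithmetic, not anything in the exam or the Report) as

    \[f'(1) = \lim_{h\to0^-}\frac{-h\sqrt{1+h}}{h} = \lim_{h\to0^-}\left(-\sqrt{1+h}\right) = -1\,.\]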

So the computation is not difficult, and with this definition we find that f'(1) = -1, as implied by the Exam question. (Note that, since f naturally extends to the right of x = 1, the limit computation can in any case be circumvented.) However, and this is the fundamental point, not everyone does this.

At the university level it is common, though far from universal, to permit differentiability at the endpoints. (The corresponding definition of continuity on a closed interval is essentially universal, at least after first year.) At the school level, however, the waters are much muddier. The VCE curriculum and the most popular and most respected Methods textbook appear to be completely silent on the issue. (This textbook also totally garbles the related issue of derivatives of piecewise defined (“hybrid”) functions.) We suspect that the vast majority of Methods teachers are similarly silent, and that the minority of teachers who do raise the issue would not in general permit differentiability at an endpoint.

In summary, it is perfectly acceptable to permit derivatives/tangents to graphs at their endpoints, and it is perfectly acceptable to proscribe them. It is also perfectly acceptable, at least at the school level, to avoid the issue entirely, as is done in the VCE curriculum, by most teachers and, in particular, in part (b) of the Exam question above.

What is blatantly unacceptable is for the VCAA examiners to spring a completely gratuitous endpoint derivative on students when the issue has never been raised. And what is pure and unadulterated madness is to spring an endpoint derivative after carefully and explicitly avoiding it on the immediately previous part of the question.

The Victorian Curriculum and Assessment Authority has a long tradition of scoring own goals. The question above, however, is spectacular. Here, the VCAA is like a goalkeeper grasping the ball firmly in both hands, taking careful aim, and flinging the ball into his own net.

The Treachery of Images

Harry scowled at a picture of a French girl in a bikini. Fred nudged Harry, man-to-man. “Like that, Harry?” he asked.

“Like what?”

“The girl there.”

“That’s not a girl. That’s a piece of paper.”

“Looks like a girl to me.” Fred Rosewater leered.

“Then you’re easily fooled,” said Harry. “It’s done with ink on a piece of paper. That girl isn’t lying there on the counter. She’s thousands of miles away, doesn’t even know we’re alive. If this was a real girl, all I’d have to do for a living would be to stay at home and cut out pictures of big fish.”

                       Kurt Vonnegut, God Bless You, Mr. Rosewater

 

It is fundamental to be able to distinguish appearance from reality. That it is very easy to confuse the two is famously illustrated by Magritte’s The Treachery of Images (La Trahison des Images):

The danger of such confusion is all the greater in mathematics. Mathematical images, graphs and the like, have intuitive appeal, but these images are mere illustrations of deep and easily muddied ideas. The danger of focussing upon the image, with the ideas relegated to the shadows, is a fundamental reason why the current emphasis on calculators and graphical software is so misguided and so insidious.

Which brings us, once again, to Mathematical Methods. Question 5 on Section Two of the second 2015 Methods exam is concerned with the function V:[0,5]\rightarrow\Bbb R, where

\phantom{\quad}  V(t) = de^{\frac{t}3} + (10-d)e^{\frac{-2t}3}\,.

Here, d \in (0,10) is a constant, with d=2 initially; students are asked to find the minimum (which occurs at t = \log_e8), and to graph V. All this is par for the course: a reasonable calculus problem thoroughly trivialised by CAS calculators. Predictably, things get worse.
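
For the record, the d = 2 computation (ours) runs as follows:

    \[V'(t) = \frac{d}{3}\,e^{\frac{t}3} - \frac{2(10-d)}{3}\,e^{\frac{-2t}3} = 0 \quad\Longleftrightarrow\quad e^{t} = \frac{2(10-d)}{d}\,,\]

so for d = 2 the stationary point is at e^t = 8, that is t = \log_e8, and since V'' > 0 this is indeed the minimum.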

In part (c)(i) of the problem students are asked to find “the set of possible values of d” for which the minimum of V occurs at t=0. (Part (c)(ii) similarly, and thus boringly and pointlessly, asks for which d the minimum occurs at t=5). Arguably, the set of possible values of d is (0,10), which of course is not what was intended; the qualification “possible” is just annoying verbiage, in which the examiners excel.

So, on to considering what the students were expected to have done for (c)(i), a 2-mark question, equating to three minutes. The Examiners’ Report pointedly remarks that “[a]dequate working must be shown for questions worth more than one mark.” What, then, constituted “adequate working” for 5(c)(i)? The Examiners’ solution consists of first setting V'(0)=0 and solving to give d=20/3, and then … well, nothing. Without further comment, the examiners magically conclude that the answer to (c)(i) is 20/3 \leqslant d < 10.

Only in the Carrollian world of Methods could the examiners’ doodles be regarded as a summary of or a signpost to any adequate solution. In truth, the examiners have offered no more than a mathematical invocation, barely relevant to the question at hand: why should V having a stationary point at t=0 for d=20/3 have any bearing on V for other values of d? The reader is invited to attempt a proper and substantially complete solution, and to measure how long it takes. Best of luck completing it within three minutes, and feel free to indicate how you went in the comments.
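
For comparison, here is a sketch of what we would regard as a substantially complete argument (ours, and not a three-minute job). Since V''(t) = \frac{d}{9}e^{\frac{t}3} + \frac{4(10-d)}{9}e^{\frac{-2t}3} > 0 for 0 < d < 10, the function V is strictly convex, and so V' is strictly increasing. The minimum of V on [0,5] therefore occurs at t = 0 exactly when V'(0) \geqslant 0, that is,

    \[\frac{d}{3} - \frac{2(10-d)}{3} \geqslant 0 \quad\Longleftrightarrow\quad 3d \geqslant 20 \quad\Longleftrightarrow\quad d \geqslant \frac{20}{3}\,,\]

giving 20/3 \leqslant d < 10. It is the convexity step, or something equivalent, that turns the examiners’ doodle into an argument.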

It is evident that the vast majority of students couldn’t make heads or tails of the question, which says more for them than for the examiners. Apparently about half the students solved V'(0)=0 and included d = 20/3 in some form in their answer, earning them one mark. Very few students got further; 4% of students received full marks on the question (and similarly on (c)(ii)).

What did the examiners actually hope for? It is pretty clear that what students were expected to do, and the most that students could conceivably do in the allotted time, was: solve V'(0)=0 (i.e. press SOLVE on the machine); then, look at the graphs (on the machine) for two or three values of d; then, simply presume that the graphs of V for all d are sufficiently predictable to “conclude” that 20/3 is the largest value of d for which the (unique) turning point of V lies in [0,5]. If it is not immediately obvious that any such approach is mathematical nonsense, the reader is invited to answer (c)(i) for the function W:[0,5]\rightarrow\Bbb R where W(t) = (6-d)t^2 + (d-2)t.

Once upon a time, Victorian Year 12 students were taught mathematics, were taught to prove things. Now, they’re taught to push buttons and to gaze admiringly at pictures of big fish.

The Median is the Message

Our first post concerns an error in the 2016 Mathematical Methods Exam 2 (year 12 in Victoria, Australia). It is not close to the silliest mathematics we’ve come across, and not even the silliest error to occur in a Methods exam. Indeed, most Methods exams are riddled with nonsense. For several reasons, however, whacking this particular error is a good way to begin: the error occurs in a recent and important exam; the error is pretty dumb; it took a special effort to make the error; and the subsequent handling of the error demonstrates the fundamental (lack of) character of the Victorian Curriculum and Assessment Authority.

The problem, first pointed out to us by teacher and friend John Kermond, is in Section B of the exam and concerns Question 3(h)(ii). This question relates to a probability distribution with “probability density function”

    \[ f(x) = \begin{cases} \dfrac{(210-x)\,e^{\frac{x-210}{20}}}{400} & 0\leqslant x \leqslant 210,\\[1ex] 0 & \text{elsewhere.} \end{cases}\]

Now, anyone with a good nose for calculus is going to be thinking “uh-oh”. It is a fundamental property of a PDF that the total integral (underlying area) should equal 1. But how are all those integrated powers of e going to cancel out? Well, they don’t. What has been defined is only approximately a PDF, with a total area of 1 - \frac{23}{2}\,e^{-\frac{21}2} \approx 0.9997. (It is easy to calculate the area exactly using integration by parts.)
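
For anyone wanting the by-parts computation (ours), the substitution u = 210 - x gives

    \[\int_0^{210} f(x)\,{\rm d}x = \frac1{400}\int_0^{210} u\,e^{-\frac{u}{20}}\,{\rm d}u = \frac1{400}\Big[-20u\,e^{-\frac{u}{20}} - 400\,e^{-\frac{u}{20}}\Big]_0^{210} = 1 - \frac{23}{2}\,e^{-\frac{21}2}\,.\]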

Below we’ll discuss the absurdity of handing students a non-PDF, but back to the exam question. 3(h)(ii) asks the students to find the median of the “probability distribution”, correct to two decimal places. Since the question makes no sense for a non-PDF, the VCAA has of course shot itself in the foot. However, we can still attempt to make some sense of the question, which is when we discover that the VCAA has also shot itself in the other foot.

The median m of a probability distribution is the half-way point. So, in the integration context here we want the m for which

a)      \phantom{\quad}  \int\limits_0^m f(x)\,{\rm d}x = \dfrac12.

As such, this question was intended to be just another CAS exercise, and so both trivial and pointless: push the button, write down the answer and on to the next question. The problem is, the median can also be determined by the equation

b)     \phantom{\quad}  \int\limits_m^{210} f(x)\,{\rm d}x = \dfrac12,

or by the equation

c)     \phantom{\quad} \int\limits_0^m f(x)\,{\rm d}x = \int\limits_m^{210} f(x)\,{\rm d}x.

And, since our function is only approximately a PDF, these three equations necessarily give three different answers: to the demanded two decimal places the answers are respectively 176.45, 176.43 and 176.44. Doh!
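
For anyone wanting to check those decimals without a CAS, here is a quick numerical sketch in Python (ours, obviously not the VCAA’s; the labels a), b), c) match the equations above), which should reproduce the three values to the quoted rounding:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def f(x):
        # the exam's "probability density function"
        return (210 - x) * np.exp((x - 210) / 20) / 400

    left = lambda m: quad(f, 0, m)[0]       # integral from 0 to m
    right = lambda m: quad(f, m, 210)[0]    # integral from m to 210

    m_a = brentq(lambda m: left(m) - 0.5, 1, 209)       # equation a)
    m_b = brentq(lambda m: right(m) - 0.5, 1, 209)      # equation b)
    m_c = brentq(lambda m: left(m) - right(m), 1, 209)  # equation c)

    print(quad(f, 0, 210)[0])   # total "area": about 0.9997, not 1
    print(m_a, m_b, m_c)        # about 176.45, 176.43, 176.44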

What to make of this? There are two obvious questions.

1. How did the VCAA end up with a PDF which isn’t a PDF?

It would be astonishing if all of the exam’s writers and checkers failed to notice the integral was not 1. It is even more astonishing if all the writers-checkers recognised and were comfortable with a non-PDF. Especially since the VCAA can be notoriously, absurdly fussy about the form and precision of answers (see below).

2. How was the error in 3(h)(ii) not detected?

It should have been routine for this mistake to have been detected and corrected with any decent vetting. Yes, we all make mistakes. Mistakes in very important exams, however, should not be so common, and the VCAA seems to make a habit of it.

OK, so the VCAA stuffed up. It happens. What happened next? That’s where the VCAA’s arrogance and cowardice shine bright for all to see. The one and only sentence in the Examiners’ Report that remotely addresses the error is:

“As [the] function f  is a close approximation of the [???] probability density function, answers to the nearest integer were accepted”. 

The wording is clumsy, and no concession has been made that the best (and uniquely correct) answer is “The question is stuffed up”, but it seems that solutions to all of a), b) and c) above were accepted. The problem, however, isn’t with the grading of the question.

It is perhaps too much to expect an insufferably arrogant VCAA to apologise, to express anything approximating regret for yet another error. But how could the VCAA fail to understand the necessity of a clear and explicit acknowledgement of the error? Apart from demonstrating total gutlessness, it is fundamentally unprofessional. How are students and teachers, especially new teachers, supposed to read the exam question and report? How are students and teachers supposed to approach such questions in the future? Are they still expected to employ the precise definitions that they have learned? Or, are they supposed to now presume that near enough is good enough?

For a pompous finale, the Examiners’ Report follows up by snarking that, in writing the integral for the PDF, “The dx was often missing from students’ working”. One would have thought that the examiners might have dispensed with their finely honed prissiness for that one paragraph. But no. For some clowns it’s never the wrong time to whine about a missing dx.

UPDATE (16 June): In the comments below, Terry Mills has made the excellent point that the prior question on the exam is similarly problematic. 3(h)(i) asks students to calculate the mean of the probability distribution, which would normally be calculated as \int xf(x)\,{\rm d}x. For our non-PDF, however, we should normalise by dividing by \int f(x)\,{\rm d}x. To the demanded two decimal places, that changes the answer from the Examiners’ Report’s 170.01 to 170.06.
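
For the record, both integrals can be computed exactly by parts (our arithmetic): \int_0^{210} xf(x)\,{\rm d}x = 170 + 250\,e^{-\frac{21}2} \approx 170.01, and so the normalised mean is

    \[\frac{\displaystyle\int_0^{210} xf(x)\,{\rm d}x}{\displaystyle\int_0^{210} f(x)\,{\rm d}x} \ = \ \frac{170 + 250\,e^{-\frac{21}2}}{1 - \frac{23}{2}\,e^{-\frac{21}2}} \ \approx\ 170.06\,.\]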