VCAA Plays Dumb and Dumber

Late last year we posted on Madness in the 2017 VCE mathematics exams, on blatant errors above and beyond the exams’ predictably general clunkiness. For one (Northern Hemisphere) exam, the subsequent VCAA Report had already appeared; this Report was pretty useless in general, and specifically it was silent on the error and the surrounding mathematical crap. None of the other reports had yet appeared.

Now, finally, all the exam reports are out. God only knows why it took half a year, but at least they’re out. We have already posted on one particularly nasty piece of nitpicking nonsense, and now we can review the VCAA’s own assessment of their five errors:

 

So, the VCAA responds to five blatant errors with five Trumpian silences. How should one describe such conduct? Unprofessional? Arrogant? Cowardly? VCAA-ish? All of the above?

 

Little Steps for Little Minds

Here’s a quick but telling nugget of awfulness from Victoria’s 2017 VCE maths exams. Q9 of the first (non-calculator) Methods Exam is concerned with the function

    \[\boldsymbol {f(x) = \sqrt{x}(1-x)\,.}\]

In Part (b) of the question students are asked to show that the gradient of the tangent to the graph of f equals \boldsymbol{\frac{1-3x}{2\sqrt{x}}}.

A normal human being would simply have asked for the derivative of f, but not much can go wrong, right? Expanding and differentiating, we have

    \[\boldsymbol {f'(x) = \frac{1}{2\sqrt{x}} - \frac32\sqrt{x}=\frac{1-3x}{2\sqrt{x}}\,.}\]

Easy, and done.
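For the sceptical, the identity is also easy to spot-check numerically. A stdlib-only sketch (the sample points are arbitrary):

```python
# spot-check the claimed derivative f'(x) = (1 - 3x)/(2*sqrt(x))
# against a central-difference approximation
from math import sqrt

def f(x):
    return sqrt(x) * (1 - x)

def claimed_derivative(x):
    return (1 - 3 * x) / (2 * sqrt(x))

h = 1e-6
for x in (0.25, 0.5, 1.0, 2.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - claimed_derivative(x)) < 1e-6
```

Not that a check was needed, but it took rather less than three minutes.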

So, how is it that 65% of Methods students scored 0 on this contrived but routine 1-point question? Did they choke on “the gradient of the tangent to the graph of f” and go on to hunt for a question written in English?

The Examiners’ Report pinpoints the issue, noting that the exam question “required a step-by-step demonstration” and that “[w]hen answering ‘show that’ questions, students should include all steps to demonstrate exactly what was done” (emphasis added). So the Report implies, for example, that our calculation above would have scored 0 because we didn’t explicitly include the step of obtaining a common denominator.

Jesus H. Christ.

Any suggestion that our calculation is an insufficient answer for a student in a senior maths class is pedagogical and mathematical lunacy. This is obvious, even ignoring the fact that Methods questions way too often are flawed and/or require the most fantastic of logical leaps. And, of course, the instruction that “all steps” be included is both meaningless and utterly mad, and the solution in the Examiners’ Report does nothing of the sort. (Exercise: Try to include all steps in the computation and simplification of f’.)

This is just one 1-point question, but such infantilising nonsense is endemic in Methods. The subject is saturated with pointlessly prissy language and infuriating, nano-step nitpicking, none of which bears the remotest resemblance to real mathematical thought or expression.

What is the message of such garbage? For the vast majority of students, who naively presume that an educational authority would have some expertise in education, the message is that mathematics is nothing but soulless bookkeeping, which should be avoided at all costs. For anyone who knows mathematics, however, the message is that Victorian maths education is in the clutches of a heartless and entirely clueless antimathematical institution.

The Wild and Woolly West

So much crap, so little time.

OK, after a long period of dealing with other stuff (shovelled on by the evil Mathologer), we’re back. There’s a big backlog, and in particular we’re working hard to find an ounce of sense in Gonski, Version N. But, first, there’s a competition to finalise, and an associated educational authority to whack.

It appears that no one pays any attention to Western Australian maths education. This, as we’ll see, is a good thing. (Alternatively, no one gave a stuff about the prize, in which case, fair enough.) So, congratulations to Number 8, who wins by default. We’ll be in touch.

A reminder: the competition was to point out the nonsense in Part 1 and Part 2 of the 2017 West Australian Mathematics Applications Exam. As with our previous challenge, this competition was inspired by one specifically awful question. The particular Applications question, however, should not distract from the Exam’s very general clunkiness. The entire Exam is amateurish; as one rabble-rouser expressed it, it is plagued by clumsy mathematics and ambiguous phrasing.

The heavy lifting in the critique below is due to the semi-anonymous Charlie. So, a very big thanks to Charlie, specifically for his detailed remarks on the Exam, and more generally for not being willing to accept that a third rate exam is simply par for WA’s course. (Hello, Victorians? Anyone there? Hello?)

We’ll get to the singularly awful question, and the singularly awful formal response, below.  First, however, we’ll provide a sample of some of the examiners’ lesser crimes. None of these other crimes are hanging offences, though some slapping wouldn’t go astray, and a couple questions probably warrant a whipping. We won’t go into much detail; clarification can be gained by referring to the Exam papers. We also don’t address the Exam as a whole in terms of the adequacy of its coverage of the Applications curriculum, though there are apparently significant issues in this regard.

In Question 1, the phrasing is confusing in parts, as was noted by Number 8. It would have been worthwhile for the examiners to state explicitly that the first term Tn corresponds to n = 1. Also, when asking for the first term (i.e. the first Tn) less than 500, it would have helped to ask specifically for the corresponding index n (which is naturally obtained as a first step), and then for Tn.

In Question 2(b)(ii), it is a little slack to claim that “an allocation of delivery drivers cannot be made yet”.

Question 5 deals with a survey, a table of answers to a yes-or-no question. It grates to have the responses to the question recorded as “agree” or “disagree”. In part (b), students are asked to identify the explanatory variable; the answer, however, depends upon what one is seeking to explain.

Question 6(a) is utterly ridiculous. The choice for the student is either to embark upon a laborious and calculator-free and who-gives-a-damn process of guess-and-check-and-cross-your-fingers, or to solve the travelling salesman problem.
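To be clear about what “solve the travelling salesman problem” entails, here is a brute-force sketch on a made-up five-town network; the distance matrix is invented purely for illustration, and the exam’s actual network is not reproduced:

```python
# exhaustive search over Hamiltonian cycles on a hypothetical 5-town network
from itertools import permutations

dist = [  # invented symmetric distance matrix, purely for illustration
    [0, 3, 4, 2, 7],
    [3, 0, 4, 6, 3],
    [4, 4, 0, 5, 8],
    [2, 6, 5, 0, 6],
    [7, 3, 8, 6, 0],
]

def tour_length(order):
    # length of the cycle 0 -> order[0] -> ... -> order[-1] -> 0
    legs = [0, *order, 0]
    return sum(dist[a][b] for a, b in zip(legs, legs[1:]))

# fixing the start town, there are 4! = 24 orderings to check;
# at n towns it's (n - 1)!, which is why "guess and check" is a bad joke
best = min(permutations(range(1, 5)), key=tour_length)
```

Even this toy case means comparing two dozen tours; the exam expects the equivalent done by hand, calculator-free, in a few minutes.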

Question 8(b) is clumsily and critically ambiguous, since it is not stated whether the payments are to be made at the beginning or the end of each quarter.

Question 10 involves some pretty clunky modelling. In particular, starting with 400 bacteria in a dish is out by an order of magnitude, or six.

Question 11(d) is worded appallingly. We are told that one of two projects will require an extra three hours to complete. Then we have to choose which project “for the completion time to be at a minimum”. Yes, one can make sense of the question, but it requires a monster of an effort.

Question 14 is fundamentally ambiguous, in the same manner as Question 8(b); it is not indicated whether the repayments are to be made at the beginning or end of each period.

 

That was good fun, especially the slapping. But now it’s time for the main event:

QUESTION 3

Question 3(a) concerns a planar graph with five faces and five vertices, A, B, C, D and E:

What is wrong with this question? As evinced by the graphs pictured above, pretty much everything.

As pointed out by Number 8, Part (i) can only be answered (by Euler’s formula) if the graph is assumed to be connected. In Part (ii), it is weird and it turns out to be seriously misleading to refer to “the” planar graph. Next, the Hamiltonian cycle requested in Part (iii) is only guaranteed to exist if the graph is assumed to be both connected and simple. Finally, in Part (iv) any answer is possible, and the answer is not uniquely determined even if we restrict to simple connected graphs.
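For the record, the arithmetic behind Number 8’s point: Euler’s formula pins down the edge count only if connectedness is assumed. A quick sketch (the component counts are illustrative; which edge counts are realisable by an actual graph is a further question, ignored here):

```python
# Euler's formula: for a connected planar graph, V - E + F = 2,
# so V = 5 and F = 5 force E = V + F - 2 = 8
V, F = 5, 5
assert V + F - 2 == 8

# with c components the formula becomes V - E + F = 1 + c, so dropping
# the connectedness assumption leaves E undetermined
edge_counts = [V + F - 1 - c for c in (1, 2, 3)]
assert edge_counts == [8, 7, 6]
```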

It is evident that the entire question is a mess. Most of the question, though not Part (iv), is rescued by assuming that any graph should be connected and simple. There is no reason, however, why students should feel free or obliged to make that assumption. Moreover, any such reading of 3(a) would implicitly conflict with 3(b), which explicitly refers to a “simple connected graph” three times.

So, how has WA’s Schools Curriculum and Standards Authority subsequently addressed their mess? This is where things get ridiculous, and seriously annoying. The only publicly available document discussing the Exam is the summary report, which is an accomplished exercise in saying nothing. Specifically, this report makes no mention of the many issues with the Exam. More generally, the summary report says little of substance or of interest to anyone, amounting to little more than admin box-ticking.

The first document that addresses Question 3 in detail is the non-public graders’ Marking Key. The Key begins with the declaration that it is “an explicit statement about [sic] what the examining panel expect of candidates when they respond to particular examination items.” [emphasis added].

What, then, are the explicit expectations in the Marking Key for Question 3(a)? In Part (i) Euler’s formula is applied without comment. For Part (ii) a sample graph is drawn, which happens to be simple, connected and semi-Eulerian; no indication is given that other, fundamentally different graphs are also possible. For Part (iii), a Hamiltonian cycle is indicated for the sample graph, with no indication that non-Hamiltonian graphs are also possible. In Part (iv), it is declared that “the” graph is semi-Eulerian, with no indication that the graph may be non-Eulerian (even if simple and connected) or Eulerian.

In summary, the Marking Key makes not a single mention of graphs being simple or connected, nor of what can happen if they are not. If the writers of the Key were properly aware of these issues, they have given no indication of it. The Key merely confirms and compounds the errors in the Exam.

Question 3 is also addressed, absurdly, in the non-public Examination Report. The Report notes that Question 3(a) failed to explicitly state “the” graph was assumed to be connected, but that “candidates made this assumption [but not the assumption of simplicity?]; particularly as they were required to determine a Hamiltonian cycle for the graph in part (iii)”. That’s it.

Well, yes, it’s obviously the students’ responsibility to look ahead at later parts of a question to determine what they should assume in earlier parts. Moreover, if they do so, they may, unlike the examiners, make proper and sufficient assumptions. Moreover, they may observe that no such assumptions are sufficient for the final part of the question.

Of course what almost certainly happened is that the students constructed the simplest graph they could, which in the vast majority of cases would have been simple and connected and Hamiltonian. But we simply cannot tell how many students were puzzled, or for how long, or whether they had to start from scratch after drawing a “wrong” graph.

In any case, the presumed fact that most (but not all) students were unaffected does not alter the other facts: that the examiners bollocksed the question; that they then bollocksed the Marking Key; that they then bollocksed the explanation of both. And, that SCSA’s disingenuous and incompetent ass-covering is conveniently hidden from public view.

The SCSA is not the most dishonest or inept educational authority in Australia, and their Applications Exam is not the worst of 2017. But one has to hand it to them, they’ve given it the old college try.

Polynomialy Perverse

What, with its stupid curricula, stupid texts and really monumentally stupid exams, it’s difficult to imagine a wealthy Western country with worse mathematics education than Australia. Which is why God gave us New Zealand.

Earlier this year we wrote about the first question on New Zealand’s 2016 Level 1 algebra exam:

A rectangle has an area of  \bf x^2+5x-36. What are the lengths of the sides of the rectangle in terms of  \bf x.

Obviously, the expectation was for the students to declare the side lengths to be the linear factors x – 4 and x + 9, and just as obviously this is mathematical crap. (Just to hammer the point, set x = 5, giving an area of 14, and think about what the side lengths “must” be.)
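The algebra-versus-geometry point is mechanical to exhibit (a trivial check, using the x = 5 example above):

```python
# the intended factorisation x^2 + 5x - 36 = (x - 4)(x + 9) is fine as algebra
for x in range(-50, 50):
    assert x ** 2 + 5 * x - 36 == (x - 4) * (x + 9)

# but it says nothing about side lengths: at x = 5 the area is 14, and the
# sides "must" be 1 and 14... or just as well 2 and 7, or anything else
area = 5 ** 2 + 5 * 5 - 36
assert area == 14
assert (5 - 4) * (5 + 9) == 14 == 2 * 7
```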

One might hope that, having inflicted this mathematical garbage on a nation of students, the New Zealand Qualifications Authority would have been gently slapped around by a mathematician or two, and that the error would not be repeated. One might hope this, but, in these idiot times, it would be very foolish to expect it.

A few weeks ago, New Zealand maths education was in the news (again). There was lots of whining about “disastrous” exams, with “impossible” questions, culminating in a pompous petition, and ministerial strutting and general hand-wringing. Most of the complaints, however, appear to be pretty trivial; sure, the exams were clunky in certain ways, but nothing that we could find was overly awful, and nothing that warranted the subsequent calls for blood.

What makes this recent whining so funny is the comparison with the deafening silence in September. That’s when the 2017 Level 1 Algebra Exams appeared, containing the exact same rectangle crap as in 2016 (Question 3(a)(i) and Question 2(a)(i)). And, as in 2016, there is no evidence that anyone in New Zealand had the slightest concern.

People like to make fun of all the sheep in New Zealand, but there’s many more sheep there than anyone suspects.

The Treachery of Images

Harry scowled at a picture of a French girl in a bikini. Fred nudged Harry, man-to-man. “Like that, Harry?” he asked.

“Like what?”

“The girl there.”

“That’s not a girl. That’s a piece of paper.”

“Looks like a girl to me.” Fred Rosewater leered.

“Then you’re easily fooled,” said Harry. “It’s done with ink on a piece of paper. That girl isn’t lying there on the counter. She’s thousands of miles away, doesn’t even know we’re alive. If this was a real girl, all I’d have to do for a living would be to stay at home and cut out pictures of big fish.”

                       Kurt Vonnegut, God Bless You, Mr. Rosewater

 

It is fundamental to be able to distinguish appearance from reality. That it is very easy to confuse the two is famously illustrated by Magritte’s The Treachery of Images (La Trahison des Images):

The danger of such confusion is all the greater in mathematics. Mathematical images, graphs and the like, have intuitive appeal, but these images are mere illustrations of deep and easily muddied ideas. The danger of focussing upon the image, with the ideas relegated to the shadows, is a fundamental reason why the current emphasis on calculators and graphical software is so misguided and so insidious.

Which brings us, once again, to Mathematical Methods. Question 5 on Section Two of the second 2015 Methods exam is concerned with the function V:[0,5]\rightarrow\Bbb R, where

    \[\boldsymbol {V(t) = de^{\frac{t}3} + (10-d)e^{\frac{-2t}3}\,.}\]

Here, d \in (0,10) is a constant, with d=2 initially; students are asked to find the minimum (which occurs at t = \log_e8), and to graph V. All this is par for the course: a reasonable calculus problem thoroughly trivialised by CAS calculators. Predictably, things get worse.
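That d = 2 warm-up at least checks out, and the check is itself trivially machine-doable. A crude, stdlib-only grid search (nothing here is exam-specific):

```python
# locate the minimum of V(t) = 2e^(t/3) + 8e^(-2t/3) on [0, 5] by grid search
from math import exp, log

def V(t, d=2):
    return d * exp(t / 3) + (10 - d) * exp(-2 * t / 3)

ts = [5 * i / 100_000 for i in range(100_001)]
t_min = min(ts, key=V)
assert abs(t_min - log(8)) < 1e-3  # the minimum is at t = log_e(8)
```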

In part (c)(i) of the problem students are asked to find “the set of possible values of d” for which the minimum of V occurs at t=0. (Part (c)(ii) similarly, and thus boringly and pointlessly, asks for which d the minimum occurs at t=5). Arguably, the set of possible values of d is (0,10), which of course is not what was intended; the qualification “possible” is just annoying verbiage, in which the examiners excel.

So, on to considering what the students were expected to have done for (c)(i), a 2-mark question, equating to three minutes. The Examiners’ Report pointedly remarks that “[a]dequate working must be shown for questions worth more than one mark.” What, then, constituted “adequate working” for 5(c)(i)? The Examiners’ solution consists of first setting V'(0)=0 and solving to give d=20/3, and then … well, nothing. Without further comment, the examiners magically conclude that the answer to (c)(i) is 20/3 \leqslant d< 10.

Only in the Carrollian world of Methods could the examiners’ doodles be regarded as a summary of or a signpost to any adequate solution. In truth, the examiners have offered no more than a mathematical invocation, barely relevant to the question at hand: why should V having a stationary point at t=0 for d=20/3 have any bearing on V for other values of d? The reader is invited to attempt a proper and substantially complete solution, and to measure how long it takes. Best of luck completing it within three minutes, and feel free to indicate how you went in the comments.

It is evident that the vast majority of students couldn’t make heads or tails of the question, which says more for them than for the examiners. Apparently about half the students solved V'(0)=0 and included d = 20/3 in some form in their answer, earning them one mark. Very few students got further; 4% of students received full marks on the question (and similarly on (c)(ii)).

What did the examiners actually hope for? It is pretty clear that what students were expected to do, and the most that students could conceivably do in the allotted time, was: solve V'(0)=0 (i.e. press SOLVE on the machine); then, look at the graphs (on the machine) for two or three values of d; then, simply presume that the graphs of V for all d are sufficiently predictable to “conclude” that 20/3 is the largest value of d for which the (unique) turning point of V lies in [0,5]. If it is not immediately obvious that any such approach is mathematical nonsense, the reader is invited to answer (c)(i) for the function W:[0,5]\rightarrow\Bbb R where W(t) = (6-d)t^2 + (d-2)t.
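The failure of the look-at-a-few-graphs heuristic is easy to exhibit numerically. A crude grid-search sketch, with W the function suggested above and the probe values of d chosen arbitrarily:

```python
from math import exp

def argmin_on_grid(f, a=0.0, b=5.0, n=100_001):
    # crude grid search for the minimiser of f on [a, b]
    ts = [a + (b - a) * i / (n - 1) for i in range(n)]
    return min(ts, key=f)

# for V the examiners' presumption happens to hold:
# the minimum is at t = 0 exactly when d >= 20/3
def V(d):
    return lambda t: d * exp(t / 3) + (10 - d) * exp(-2 * t / 3)

assert argmin_on_grid(V(20 / 3 + 0.1)) == 0.0
assert argmin_on_grid(V(20 / 3 - 0.1)) > 0.0

# for W it fails: W'(0) = 0 gives d = 2, but for d > 7 the minimum on
# [0, 5] sits at the far endpoint, so solving W'(0) = 0 and extrapolating
# from a couple of graphs gives a wrong answer
def W(d):
    return lambda t: (6 - d) * t ** 2 + (d - 2) * t

assert argmin_on_grid(W(2)) == 0.0
assert argmin_on_grid(W(8)) == 5.0
```

The point, of course, is that a graph or three on a machine proves nothing; the sign of the derivative over the whole interval has to be argued.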

Once upon a time, Victorian Year 12 students were taught mathematics, were taught to prove things. Now, they’re taught to push buttons and to gaze admiringly at pictures of big fish.

Factoring in the Stupidity

It is very brave to claim that one has found the stupidest maths exam question of all time. And the claim is probably never going to be true: there will always be some poor education system, in rural Peru or wherever, doing something dumber than anything ever done before. For mainstream exams in wealthy Western countries, however, New Zealand has come up with something truly exceptional.

Last year, New Zealand students at Year 11 sat one of two algebra exams administered by the New Zealand Qualifications Authority. The very first question on the second exam reads:

A rectangle has an area of  \bf x^2+5x-36. What are the lengths of the sides of the rectangle in terms of  \bf x.

The real problem here is to choose the best answer, which we can probably all agree is sides of length \pi and (x^2+5x-36)/\pi.

OK, clearly what was intended was for students to factorise the quadratic and to declare the factors as the sidelengths of the rectangle. Which is mathematical lunacy. It is simply wrong.

Indeed, the question would arguably still have been wrong, and would definitely still have been awful, even if it had been declared that x has a unit of length: who wants students to be thinking that the area of a rectangle uniquely determines its sidelengths? But, even that tiny sliver of sense was missing.

So, what did students do with this question? (An equivalent question, 3(a)(i), appeared on the first exam.) We’re guessing that, seeing no alternative, the majority did exactly what was intended and factorised the quadratic. So, no harm done? Hah! It is incredible that such a question could make it onto a national exam, but it gets worse.

The two algebra exams were widely and strongly criticised, by students and teachers and the media. People complained that the exams were too difficult and too different in style from what students and teachers had been led to expect. Both types of criticism may well have been valid. For all of the public criticism of the exams, however, we could find no evidence of the above question or its Exam 1 companion being flagged. Plenty of complaining about hard questions, plenty of complaining about unexpected questions, but not a word about straight out mathematical crap.

So, not only do questions devoid of mathematical sense appear on a nationwide exam; the entire nation of students is then left to accept that this is what mathematics is: meaningless autopilot calculation. Well done, New Zealand. You’ve made the education authorities in rural Peru feel very much better about themselves.

Accentuate the Negative

Each year about a million Australian school students are required to sit the Government’s NAPLAN tests. Produced by ACARA, the same outfit responsible for the stunning Australian Curriculum, these tests are expensive, annoying and pointless. In particular it is ridiculous for students to sit a numeracy test, rather than a test on arithmetic or more broadly on mathematics. It guarantees that the general focus will be wrong and that specific weirdnesses will abound. The 2017 NAPLAN tests, conducted last week, have not disappointed. Today, however, we have other concerns.

Wading into NAPLAN’s numeracy quagmire, one can often find a nugget or two of glowing wrongness. Here is a question from the 2017 Year 9 test:

In this inequality, n is a whole number.

\color{blue} \dfrac7{n} \boldsymbol{<} \dfrac57

What is the smallest possible value for n to make this inequality true?

The wording is appalling, classic NAPLAN. They could have simply asked:

What is the smallest whole number n for which \color{red} \dfrac7{n} \boldsymbol{<} \dfrac57\, ?

But of course the convoluted wording is the least of our concerns. The fundamental problem is that the use of the expression “whole number” is disastrous.

Mathematicians would avoid the expression “whole number”, but if pressed would most likely consider it a synonym for “integer”, as is done in the Australian Curriculum (scroll down) and some dictionaries. With this interpretation, where the negative integers are included, the above NAPLAN question obviously has no solution. Sometimes, including in, um, the Australian Curriculum (scroll down), “whole number” is used to refer to only the nonnegative integers or, rarely, to only the positive integers. With either of these interpretations the NAPLAN question is pretty nice, with a solution n = 10. But it remains the case that, at best, the expression “whole number” is irretrievably ambiguous and the NAPLAN question is fatally flawed.
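The two readings are mechanical to compare (straight Python, trying “whole number” both ways):

```python
# reading "whole number" as any integer: every negative n satisfies the
# inequality, since 7/n is then negative, so there is no smallest such n
assert all(7 / n < 5 / 7 for n in range(-1000, 0))

# reading "whole number" as a positive integer: the answer is n = 10
solutions = [n for n in range(1, 50) if 7 / n < 5 / 7]
assert min(solutions) == 10
```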

Pointing out an error in a NAPLAN test is like pointing out one of Donald Trump’s lies: you feel you must, but doing so inevitably distracts from the overall climate of nonsense and nastiness. Still, one can hope that ACARA will be called on this, will publicly admit that they stuffed up, and will consider employing a competent mathematician to vet future questions. Unfortunately, ACARA is just about as inviting of criticism and as open to admitting error as Donald Trump.