VCAA Plays Dumb and Dumber

Late last year we posted on Madness in the 2017 VCE mathematics exams, on blatant errors above and beyond the exams’ predictably general clunkiness. For one (Northern Hemisphere) exam, the subsequent VCAA Report had already appeared; this Report was pretty useless in general, and specifically it was silent on the error and the surrounding mathematical crap. None of the other reports had yet appeared.

Now, finally, all the exam reports are out. God only knows why it took half a year, but at least they’re out. We have already posted on one particularly nasty piece of nitpicking nonsense, and now we can review the VCAA’s own assessment of their five errors:

 

So, the VCAA responds to five blatant errors with five Trumpian silences. How should one describe such conduct? Unprofessional? Arrogant? Cowardly? VCAA-ish? All of the above?

 

WitCH 3

First, a quick note about these WitCHes. Any reasonable mathematician looking at such text extracts would immediately see the mathematical flaw(s) and would wonder how such half-baked nonsense could be published. We are aware, however, that for teachers and students, or at least Australian teachers and students, it is not nearly so easy. Since school mathematics is completely immersed in semi-sense, it is difficult to know the rules of the game. It is also perhaps difficult to know how a tentative suggestion might be received on a snarky blog such as this. We’ll just say, though we have little time for don’t-know-as-much-as-they-think textbook writers, we’re very patient with teachers and students who are honestly trying to figure out what’s what.

Now onto WitCH 3, which follows on from WitCH 2, coming from the same chapter of Cambridge’s Specialist Mathematics VCE Units 3 & 4 (2018).* The extract is below, and please post your thoughts in the comments. Also a reminder, WitCH 1 and WitCH 2 are still there, awaiting proper resolution. Enjoy.

* Cambridge is a good target, since they are the most respected of standard Australian school texts. We will, however, be whacking other publishers, and we’re always open to suggestion. Just email if you have a good WitCH candidate, or crap of any kind you wish to be attacked.

Tweel’s Mathematical Puzzle

Tweel is one of the all-time great science fiction characters, the hero of Stanley G. Weinbaum’s wonderful 1934 story, A Martian Odyssey. The story is set on Mars in the 21st century and begins with astronaut Dick Jarvis crashing his mini-rocket. Jarvis then happens upon the ostrich-like Tweel being attacked by a tentacled monster. Jarvis saves Tweel, they become friends and Tweel accompanies Jarvis on his long journey back to camp and safety, the two meeting all manner of exotic Martians along the way.

A Martian Odyssey is great fun, fantastically inventive pulp science fiction, but the weird, endearing and strangely intelligent Tweel raises the story to another level. Tweel and Jarvis attempt to communicate, and Tweel learns a few English words while Jarvis can make no sense of Tweel’s sounds, is simply unable to figure out how Tweel thinks. However, Jarvis gets an idea:

“After a while I gave up the language business, and tried mathematics. I scratched two plus two equals four on the ground, and demonstrated it with pebbles. Again Tweel caught the idea, and informed me that three plus three equals six.”

That gave them a minimal form of communication and Tweel turns out to be very resourceful with the little mathematics they share. Coming across a weird rock creature, Tweel describes the creature as

“No one-one-two. No two-two-four”. 

Later Tweel describes some crazy barrel creatures:

“One-one-two yes! Two-two-four no!” 

A Martian Odyssey works so well because Weinbaum simply describes the craziness that Jarvis encounters, with no attempt to explain it. Tweel is just sufficiently familiar – a few words, a little arithmetic and a sense of loyalty – to make the craziness seem meaningful if still not comprehensible.

But now, here’s the puzzle. The communication between Jarvis and Tweel depends upon the universality of mathematics, that all intelligent creatures will understand and agree that 1 + 1 = 2 and 2 + 2 = 4, and so forth.

But why? Why is 1 + 1 = 2? Why is 2 + 2 = 4?

The answers are perhaps not so obvious. First, however, you should go read Weinbaum’s awesome story (and the sequel). Then ponder the puzzle.

What is this Crap Here?

OK, Dear Reader, you’ve got work to do.

So far on this blog we haven’t attacked textbooks much at all. That’s because Australian maths texts are, in the main, well-written and mathematically sound.

Yep, just kidding. Of course the texts are pretty much universally and uniformly awful. Choosing a random page from almost any text, one is pretty much guaranteed to find something ranging from annoying to excruciating. But, the very extent of the awfulness makes it difficult and time-consuming and tiring to grasp and to critique any one specific piece of the awful puzzle.

The Evil Mathologer, however, has come up with a very good idea: just post a screenshot of a particularly awful piece of text, and leave others to think and to write about it. So, here we go.

Our first WitCH sample, below, comes courtesy of the Evil Mathologer and is from Cambridge Essentials, Year 9 (2018). You, Dear Reader, are free to simply admire the awfulness. You may, however, go further, and what you might do depends upon who you are:

  • If you believe you can pinpoint the awfulness in the excerpt then feel free to spell it out in the comments, in small or great detail. You could also offer suggestions on how the ideas could have been presented correctly and coherently. You are also free to ponder how this nonsense came to be, what a teacher or student should do if they have to deal with this nonsense, whether we can stop such nonsense,* and so on.
  • If you don’t know or, worse, don’t believe the excerpt below is awful then you should quickly find someone to explain to you why it is.

Here it is. Enjoy. (Updated below.)

* We can’t.

Update

Following on from the comments, here is a summary of the issues with the page above. We also hope to post generally on index laws in the near future.

  • The major crime is that the initial proof is ass-backwards. \(9^{1/2} = \sqrt{9}\) by definition, and that’s it. It is then a consequence of such definitions that the index laws continue to hold for fractional indices.
  • Beginning with \(9^{1/2}\) is pedagogically weird, since it simplifies to 3, clouding the issue.
  • The phrasing “∛5 is irrational and [sic] cannot be expressed as a fraction” is off-key.
  • The expression “with no repeated pattern” is vague and confusing.
  • The term “surd” is common but is close to meaningless.
  • Exploring irrationality with a calculator is nonsensical and derails meaningful exploration.
  • Overall, the page is long, cluttered and clumsy (and wrong). It is a pretty safe bet that few teachers and fewer students ever attempt to read it.
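For what it’s worth, here is a minimal sketch of the right-way-around logic: one defines \(9^{1/2} = \sqrt{9}\), and that definition is then the only choice consistent with the index laws.

```latex
% If the index law a^m a^n = a^{m+n} is to keep holding for
% fractional indices, then whatever 9^{1/2} denotes must satisfy
\left(9^{1/2}\right)^2 = 9^{1/2}\cdot 9^{1/2} = 9^{\frac12+\frac12} = 9^1 = 9\,.
% So 9^{1/2} is a square root of 9, and taking the positive root,
9^{1/2} = \sqrt{9} = 3\,.
```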

Little Steps for Little Minds

Here’s a quick but telling nugget of awfulness from Victoria’s 2017 VCE maths exams. Q9 of the first (non-calculator) Methods Exam is concerned with the function

    \[\boldsymbol {f(x) = \sqrt{x}(1-x)\,.}\]

In Part (b) of the question students are asked to show that the gradient of the tangent to the graph of f equals \(\boldsymbol{\frac{1-3x}{2\sqrt{x}}}\).

A normal human being would simply have asked for the derivative of f, but not much can go wrong, right? Expanding and differentiating, we have

    \[\boldsymbol {f'(x) = \frac{1}{2\sqrt{x}} - \frac32\sqrt{x}=\frac{1-3x}{2\sqrt{x}}\,.}\]

Easy, and done.
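As a sanity check (ours, not the examiners’; the function names below are ours too), the closed form agrees with a central-difference approximation of the derivative:

```python
import math

def f(x):
    # the exam's function f(x) = sqrt(x) * (1 - x)
    return math.sqrt(x) * (1 - x)

def fprime_closed(x):
    # the claimed gradient (1 - 3x) / (2 sqrt(x))
    return (1 - 3 * x) / (2 * math.sqrt(x))

def fprime_numeric(x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# the two agree to numerical precision across (0, 1)
for x in [0.1, 0.25, 0.5, 0.9]:
    assert abs(fprime_closed(x) - fprime_numeric(x)) < 1e-5
```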

So, how is it that 65% of Methods students scored 0 on this contrived but routine 1-point question? Did they choke on “the gradient of the tangent to the graph of f” and go on to hunt for a question written in English?

The Examiners’ Report pinpoints the issue, noting that “the exam question required a step-by-step demonstration …. And, [w]hen answering ‘show that’ questions, students should include all steps to demonstrate exactly what was done” (emphasis added). So the Report implies, for example, that our calculation above would have scored 0 because we didn’t explicitly include the step of obtaining a common denominator.

Jesus H. Christ.

Any suggestion that our calculation is an insufficient answer for a student in a senior maths class is pedagogical and mathematical lunacy. This is obvious, even ignoring the fact that Methods questions way too often are flawed and/or require the most fantastic of logical leaps. And, of course, the instruction that “all steps” be included is both meaningless and utterly mad, and the solution in the Examiners’ Report does nothing of the sort. (Exercise: Try to include all steps in the computation and simplification of f’.)

This is just one 1-point question, but such infantilising nonsense is endemic in Methods. The subject is saturated with pointlessly prissy language and infuriating, nano-step nitpicking, none of which bears the remotest resemblance to real mathematical thought or expression.

What is the message of such garbage? For the vast majority of students, who naively presume that an educational authority would have some expertise in education, the message is that mathematics is nothing but soulless bookkeeping, which should be avoided at all costs. For anyone who knows mathematics, however, the message is that Victorian maths education is in the clutches of a heartless and entirely clueless antimathematical institution.

NAPLAN’s Numeracy Test

NAPLAN has been much in the news of late, with moves for the tests to go online while simultaneously there have been loud calls to scrap the tests entirely. And, the 2018 NAPLAN tests have just come and gone. We plan to write about all this in the near future, and in particular we’re curious to see if the 2018 tests can top 2017’s clanger. For now, we offer a little, telling tidbit about ACARA.

In 2014, we submitted FOI applications to ACARA for the 2012-2014 NAPLAN Numeracy tests. This followed a long and bizarre but ultimately successful battle to formally obtain the 2008-2011 tests, now available here: some, though far from all, of the ludicrous details of that battle are documented here. Our requests for the 2012-2014 papers were denied by ACARA, then denied again after ACARA’s internal “review”. They were denied once more by the Office of the Australian Information Commissioner. We won’t go into OAIC’s decision here, except to state that we regard it as industry-capture idiocy. We lacked the energy and the lawyers, however, to pursue the matter further.

Here, we shall highlight one hilarious component of ACARA’s reasoning. As part of their review of our FOI applications, ACARA was obliged under the FOI Act to consider the public interest arguments for or against disclosure. In summary, ACARA’s FOI officer evaluated the arguments for disclosure as follows:

  • Promoting the objects of the FOI Act — 1/10
  • Informing a debate on a matter of public importance — 1/10
  • Promoting effective oversight of public expenditure — 0/10

Yes, the scoring is farcical and self-serving, but let’s ignore that.

ACARA’s FOI officer went on to “total” the public interest arguments in favour of disclosure. They obtained a “total” of 2/10.

Seriously.

We then requested an internal review, pointing out, along with much other nonsense, ACARA’s FOI officer’s dodgy scoring and dodgier arithmetic. The internal “review” was undertaken by ACARA’s CEO. His “revised” scoring was as follows:

  • Promoting the objects of the FOI Act — 1/10
  • Informing a debate on a matter of public importance — 1/10
  • Promoting effective oversight of public expenditure — 0/10

And his revised total? Once again, 2/10.

Seriously.

These are the clowns in charge of testing Australian students’ numeracy.

The Wild and Woolly West

So, much crap, so little time.

OK, after a long period of dealing with other stuff (shovelled on by the evil Mathologer), we’re back. There’s a big backlog, and in particular we’re working hard to find an ounce of sense in Gonski, Version N. But, first, there’s a competition to finalise, and an associated educational authority to whack.

It appears that no one pays any attention to Western Australian maths education. This, as we’ll see, is a good thing. (Alternatively, no one gave a stuff about the prize, in which case, fair enough.) So, congratulations to Number 8, who wins by default. We’ll be in touch.

A reminder, the competition was to point out the nonsense in Part 1 and Part 2 of the 2017 West Australian Mathematics Applications Exam. As with our previous challenge, this competition was inspired by one specifically awful question. The particular Applications question, however, should not distract from the Exam’s very general clunkiness. The entire Exam is amateurish, as one rabble-rouser expressed it, plagued by clumsy mathematics and ambiguous phrasing.

The heavy lifting in the critique below is due to the semi-anonymous Charlie. So, a very big thanks to Charlie, specifically for his detailed remarks on the Exam, and more generally for not being willing to accept that a third rate exam is simply par for WA’s course. (Hello, Victorians? Anyone there? Hello?)

We’ll get to the singularly awful question, and the singularly awful formal response, below. First, however, we’ll provide a sample of some of the examiners’ lesser crimes. None of these other crimes are hanging offences, though some slapping wouldn’t go astray, and a couple of questions probably warrant a whipping. We won’t go into much detail; clarification can be gained by referring to the Exam papers. We also don’t address the Exam as a whole in terms of the adequacy of its coverage of the Applications curriculum, though there are apparently significant issues in this regard.

Question 1, the phrasing is confusing in parts, as was noted by Number 8. It would have been worthwhile for the examiners to explicitly state that the first term \(T_n\) corresponds to n = 1. Also, when asking for the first term (i.e. the first \(T_n\)) less than 500, it would have helped to have specifically asked for the corresponding index n (which is naturally obtained as a first step), and then for \(T_n\).

Question 2(b)(ii), it is a little slack to claim that “an allocation of delivery drivers cannot be made yet”.

Question 5 deals with a survey, a table of answers to a yes-or-no question. It grates to have the responses to the question recorded as “agree” or “disagree”. In part (b), students are asked to identify the explanatory variable; the answer, however, depends upon what one is seeking to explain.

Question 6(a) is utterly ridiculous. The choice for the student is either to embark upon a laborious and calculator-free and who-gives-a-damn process of guess-and-check-and-cross-your-fingers, or to solve the travelling salesman problem.

Question 8(b) is clumsily and critically ambiguous, since it is not stated whether the payments are to be made at the beginning or the end of each quarter.

Question 10 involves some pretty clunky modelling. In particular, starting with 400 bacteria in a dish is out by an order of magnitude, or six.

Question 11(d) is worded appallingly. We are told that one of two projects will require an extra three hours to complete. Then we have to choose which project “for the completion time to be at a minimum”. Yes, one can make sense of the question, but it requires a monster of an effort.

Question 14 is fundamentally ambiguous, in the same manner as Question 8(b); it is not indicated whether the repayments are to be made at the beginning or end of each period.
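To see why the beginning-versus-end ambiguity matters (for 8(b) and 14 alike): the two readings give genuinely different answers. Here is a sketch with hypothetical figures, not the exam’s:

```python
def balance_after(principal, rate, payment, periods, pay_first=False):
    """Loan balance after the given number of periods.
    pay_first=True: repayment at the START of each period;
    pay_first=False: repayment at the END of each period."""
    balance = principal
    for _ in range(periods):
        if pay_first:
            balance -= payment       # pay first ...
            balance *= 1 + rate      # ... then interest accrues
        else:
            balance *= 1 + rate      # interest accrues first ...
            balance -= payment       # ... then pay
    return balance

# Hypothetical loan: $10000 at 2% per quarter, $500 quarterly repayments.
end_of_quarter = balance_after(10000, 0.02, 500, 8)
start_of_quarter = balance_after(10000, 0.02, 500, 8, pay_first=True)
# Paying at the start of each quarter leaves a strictly smaller balance,
# so an exam that doesn't specify the timing has no unique answer.
```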

 

That was good fun, especially the slapping. But now it’s time for the main event:

QUESTION 3

Question 3(a) concerns a planar graph with five faces and five vertices, A, B, C, D and E:

What is wrong with this question? As evinced by the graphs pictured above, pretty much everything.

As pointed out by Number 8, Part (i) can only be answered (by Euler’s formula) if the graph is assumed to be connected. In Part (ii), it is weird and it turns out to be seriously misleading to refer to “the” planar graph. Next, the Hamiltonian cycle requested in Part (iii) is only guaranteed to exist if the graph is assumed to be both connected and simple. Finally, in Part (iv) any answer is possible, and the answer is not uniquely determined even if we restrict to simple connected graphs.
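For the record, here is the arithmetic Part (i) intends, hedged with the connectedness the question failed to state (a sketch; the function name is ours):

```python
def edges_from_euler(vertices, faces):
    """Solve Euler's formula v - e + f = 2 for e.
    Valid ONLY for a connected planar graph; with k components
    the formula becomes v - e + f = 1 + k, and the answer changes."""
    return vertices + faces - 2

# The exam's graph: 5 vertices and 5 faces gives 8 edges -- if connected.
print(edges_from_euler(5, 5))  # prints 8
```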

It is evident that the entire question is a mess. Most of the question, though not Part (iv), is rescued by assuming that any graph should be connected and simple. There is also no reason, however, why students should feel free or obliged to make that assumption. Moreover, any such reading of 3(a) would implicitly conflict with 3(b), which explicitly refers to a “simple connected graph” three times.

So, how has WA’s Schools Curriculum and Standards Authority subsequently addressed their mess? This is where things get ridiculous, and seriously annoying. The only publicly available document discussing the Exam is the summary report, which is an accomplished exercise in saying nothing. Specifically, this report makes no mention of the many issues with the Exam. More generally, the summary report says little of substance or of interest to anyone, amounting to little more than admin box-ticking.

The first document that addresses Question 3 in detail is the non-public graders’ Marking Key. The Key begins with the declaration that it is “an explicit statement about [sic] what the examining panel expect of candidates when they respond to particular examination items.” [emphasis added].

What, then, are the explicit expectations in the Marking Key for Question 3(a)? In Part (i) Euler’s formula is applied without comment. For Part (ii) a sample graph is drawn, which happens to be simple, connected and semi-Eulerian; no indication is given that other, fundamentally different graphs are also possible. For Part (iii), a Hamiltonian cycle is indicated for the sample graph, with no indication that non-Hamiltonian graphs are also possible. In Part (iv), it is declared that “the” graph is semi-Eulerian, with no indication that the graph may be non-Eulerian (even if simple and connected) or Eulerian.

In summary, the Marking Key makes not a single mention of graphs being simple or connected, nor what can happen if they are not. If the writers of the Key were properly aware of these issues they have given no such indication. The Key merely confirms and compounds the errors in the Exam.

Question 3 is also addressed, absurdly, in the non-public Examination Report. The Report notes that Question 3(a) failed to explicitly state “the” graph was assumed to be connected, but that “candidates made this assumption [but not the assumption of simplicity?]; particularly as they were required to determine a Hamiltonian cycle for the graph in part (iii)”. That’s it.

Well, yes, it’s obviously the students’ responsibility to look ahead at later parts of a question to determine what they should assume in earlier parts. If they do so, they may, unlike the examiners, make proper and sufficient assumptions. They may also observe, however, that no such assumptions are sufficient for the final part of the question.

Of course what almost certainly happened is that the students constructed the simplest graph they could, which in the vast majority of cases would have been simple and connected and Hamiltonian. But we simply cannot tell how many students were puzzled, or for how long, or whether they had to start from scratch after drawing a “wrong” graph.

In any case, the presumed fact that most (but not all) students were unaffected does not alter the other facts: that the examiners bollocksed the question; that they then bollocksed the Marking Key; that they then bollocksed the explanation of both. And, that SCSA’s disingenuous and incompetent ass-covering is conveniently hidden from public view.

The SCSA is not the most dishonest or inept educational authority in Australia, and their Applications Exam is not the worst of 2017. But one has to hand it to them, they’ve given it the old college try.