MAV’s Trials and Tribulations

Yeah, it’s the same joke, but it’s not our fault: if people keep screwing up trials, we’ll keep making “trial” jokes. In this case the trial is MAV’s Trial Exam 1 for Mathematical Methods. The exam is, indeed, a trial.

Regular readers of this blog will be aware that we’re not exactly a fan of the MAV (and vice versa). The Association has, on occasion, been arrogant, inept, censorious, and demeaningly subservient to the VCAA. The MAV is also regularly extended red carpet invitations to VCAA committees and reviews, and they have somehow weaseled their way into being a member of AMSI. Acting thusly, and treated thusly, the MAV is a legitimate and important target. Nonetheless, we generally prefer to leave the MAV to their silly games and to focus upon the official screwer upperers. But, on occasion, someone throws some of MAV’s nonsense our way, and it is pretty much impossible to ignore; that is the situation here.

As we detail below, MAV’s Methods Trial Exam 1 is shoddy. Most of the questions are unimaginative, unmotivated and poorly written. The overwhelming emphasis is not on testing insight but, rather, on tedious computation towards a who-cares goal, with droning solutions to match. Still, we wouldn’t bother critiquing the exam, except for one question. This question simply must be slammed for the anti-mathematical crap that it is.

The final question, Question 10, of the trial exam concerns the function

\color{blue}\boldsymbol{f(x) =\frac{2}{(x-1)^2}- \frac{20}{9}}

on the domain \boldsymbol{(-\infty,1)}. Part (a) asks students to find \boldsymbol{f^{-1}} and its domain, and part (b) then asks,

Find the coordinates of the point(s) of intersection of the graphs of \color{blue}\boldsymbol{f} and \color{blue}\boldsymbol{f^{-1}}.
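
(For the record, and assuming our algebra is right: since \boldsymbol{x < 1} forces the negative square root, part (a) gives \boldsymbol{f^{-1}(x) = 1 - \sqrt{\frac{2}{x + \frac{20}{9}}}}, with domain \boldsymbol{\left(-\frac{20}{9}, \infty\right)}, the range of \boldsymbol{f}.)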

Regular readers will know exactly the Hellhole to which this is heading. The solutions begin,

Solve  \color{blue}\boldsymbol{\frac{2}{(x-1)^2}- \frac{20}{9} =x}  for  \color{blue}\boldsymbol{x},

which is suggested without a single accompanying comment, nor even a Magrittesque diagram. It is nonsense.

It was nonsense in 2010 when it appeared on the Methods exam and report, and it was nonsense again in 2011. It was nonsense in 2012 when we slammed it, and it was nonsense again when it reappeared in 2017 and we slammed it again. It is still nonsense, it will always be nonsense and, at this stage, the appearance of the nonsense is jaw-dropping and inexcusable.

It is simply not legitimate to swap the equation \boldsymbol{f(x) = f^{-1}(x)} for \boldsymbol{f(x) = x}, unless a specific argument is provided for the specific function. When the swap is valid, that argument can usually be given. Easily. We laid it all out, and if anybody in power gave a damn then this type of problem could be taught properly and tested properly. But, no.
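
For what it’s worth, here is a minimal sketch of the justification that could have accompanied the solution (our wording, not the MAV’s). The exam’s \boldsymbol{f} is strictly increasing on \boldsymbol{(-\infty,1)}, since \boldsymbol{f'(x) = -\frac{4}{(x-1)^3} > 0} there. And, for a strictly increasing \boldsymbol{f}, any intersection of \boldsymbol{f} and \boldsymbol{f^{-1}} must lie on the line \boldsymbol{y = x}: if \boldsymbol{f(a) = f^{-1}(a) = c} then \boldsymbol{f(a) = c} and \boldsymbol{f(c) = a}; if \boldsymbol{a < c} then \boldsymbol{f(a) < f(c)}, that is \boldsymbol{c < a}, a contradiction, and \boldsymbol{a > c} fails in the same manner; so \boldsymbol{a = c}. That one-paragraph argument is what licenses “Solve \boldsymbol{f(x) = x}”. It took us three sentences.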

What were the exam writers thinking? We can only see three possibilities:

a) The writers are too dumb or too ignorant to recognise the problem;

b) The writers recognise the problem but don’t give a damn;

c) The writers recognise the problem and give a damn, but presume that VCAA don’t give a damn.

We have no idea which it is, but we can see no fourth option. Whatever the reason, there is no longer any excuse for this crap. Even if one presumes or knows that VCAA will continue with the moronic, ritualistic testing of this type of problem, there is absolutely no excuse for not also including a clear and proper justification for the solution. None.

What of the rest of the MAV, what of the vetters and the reviewers? Did no one who checked the trial exam flag this nonsense? Or, were they simply overruled by others who were worse-informed but better-connected? What about the MAV Board? Is there anyone at all at the MAV who gives a damn?

*********************

Postscript: For the record, here, briefly, are other irritants from the exam:

Q2. There are infinitely many choices of integers \boldsymbol{a} and \boldsymbol{b} with \boldsymbol{a/\sqrt{b}} equal to the indicated answer of \boldsymbol{-2/\sqrt{3}}; for example, \boldsymbol{-4/\sqrt{12}} and \boldsymbol{-6/\sqrt{27}} work just as well.

Q3. This is not, or at least should not be, a Methods question. Integrals of the form \boldsymbol{\int\!\frac{f'}{f}\ } with \boldsymbol{f} non-linear, such as \boldsymbol{\int\!\frac{2x}{x^2+1}\,dx}, are not, or at least are not supposed to be, examinable.

Q4. The writers do not appear to know what “hence” means. There are, once again, infinitely many choices of \boldsymbol{a} and \boldsymbol{b}.

Q5. “Appropriate mathematical reasoning” is a pretty fancy title for the trivial application of a (stupid) definition. The choice of the subscripted \boldsymbol{g_1} is needlessly ugly and confusing. Part (c) is fundamentally independent of the boring nitpicking of parts (a) and (b). The writers still don’t appear to know what “hence” means.

Q6. An ugly question, guided by a poorly drawn graph. It is ridiculous to ask for “a rule” in part (a), since one can more directly ask for the coefficients \boldsymbol{a}, \boldsymbol{b} and \boldsymbol{c}.

Q7. A tedious question, which tests very little other than arithmetic. There are, once again, infinitely many forms of the answer.

Q8. The endpoints of the domain for \boldsymbol{\sin x} are needlessly and confusingly excluded. The sole purpose of the question is to provide a painful, Magrittesque method of solving \boldsymbol{\sin x = \tan x}, which can be solved simply and directly: multiplying by \boldsymbol{\cos x} gives \boldsymbol{\sin x \cos x = \sin x}, and so \boldsymbol{\sin x = 0} or \boldsymbol{\cos x = 1}.

Q9. A tedious question with little purpose. The factorisation of the cubic can easily be done without resorting to fractions.

Q10. Discussed above. The waste of a precious opportunity to present and to teach mathematical thought.

UPDATE (28/09/20)

John (no) Friend has located an excellent paper by two Singaporean maths ed guys, Ng Wee Leng and Ho Foo Him. Their paper investigates (and justifies) various aspects of solving \boldsymbol{f(x) = f^{-1}(x)}.

Bernoulli Trials and Tribulations

This one feels relatively minor to us. It is, however, a clear own goal from the VCAA, and it is one that has annoyed many Mathematical Methods teachers. So, as a public service, we’re offering a place for teachers to bitch about it.*

One of the standard topics in Methods is the binomial distribution: the probabilities you get when repeatedly performing a hit-or-miss trial. Binomial probability was once a valuable and elegant VCE topic, before it was destroyed by CAS. That, however, is a story for another time; here, we have smaller fish to fry.

The hits-or-misses of a binomial distribution are sometimes called Bernoulli trials, and this is how they are referred to in VCE. That is just jargon, and it doesn’t strike us as particularly useful jargon, but it’s ok.** There is also what is referred to as the Bernoulli distribution, where the hit-or-miss is performed exactly once. That is, the Bernoulli distribution is just the n = 1 case of the binomial distribution. Again, just jargon, and close to useless jargon, but still sort of ok. Except it’s not ok.
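
To spell out the relationship, in our notation: a Bernoulli random variable \boldsymbol{X} with parameter \boldsymbol{p} has \boldsymbol{\Pr(X=1) = p} and \boldsymbol{\Pr(X=0) = 1-p}, which is precisely the \boldsymbol{n = 1} case of the binomial formula \boldsymbol{\Pr(X=k) = \binom{n}{k}p^k(1-p)^{n-k}}.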

Neither the VCE study design nor, we’re guessing, any of the VCE textbooks makes any reference to the Bernoulli distribution. Which is why the special Plague Year formula sheet listing the Bernoulli distribution has caused such confusion and annoyance:

Now, to be fair, the VCAA were trying to be helpful. It’s a crazy year, with big adjustments on the run, and the formula sheet*** was heavily adapted for the pruned syllabus. But still, why would one think to add a distribution, even a gratuitous one? What the Hell were they thinking?

Does it really matter? Well, yes. If “Bernoulli distribution” is a thing, then students must be prepared for that thing to appear in exam questions; they must be familiar with that jargon. But then, a few weeks after the Plague Year formula sheet appeared, schools were alerted and VCAA’s Plague Year FAQ sheet**** was updated:

This very wordy weaseling is VCAA-speak for “We stuffed up but, in line with long-standing VCAA policy, we refuse to acknowledge we stuffed up”. The story of the big-name teachers who failed to have this issue addressed, and of the little-name teacher who succeeded, is also very interesting. But, it is not our story to tell.

 

*) We extend our standard apology to all precious statisticians for our language.

**) Not close to ok is the studied and foot-shooting refusal of the VCAA and textbooks to use the standard and very useful notation q = 1 – p.

***) Why on Earth do the exams have a formula sheet?

****) The most frequently asked question is, “Why do you guys keep stuffing up?”, but VCAA haven’t gotten around to answering that one yet.

It Doesn’t GEL: The General Error List

This is the home for Further Mathematics (24/11/23 – now called General Maths) exam errors. The guidelines are given on the Methods error post, and there is also a Specialist error post.

*******************

2023 Exam 2 (No exam yet, discussed here)

Q7(c). The question cannot be properly answered as written. 

Q7(d). There are two solutions, one of which will reportedly not be accepted as correct. 

Q9(d). An extra 1 appeared in the matrix. 

Q11. A poorly written question, with (at least) two correct answers.

Q14(d). There is an extra “of” in the preamble. (24/11/23) As has been pointed out in a comment, since this correction was announced at the beginning of the exam, it’s not kosher to list it here as an error.

2023 Exam 1 (No exam yet, discussed here)

MCQ26 The expression “Q multiplied by P” is absolutely fatal in the context of matrices and, if there is a clear meaning to be had, the meaning is Q x P. The question, however, requires P x Q.

2023 NHT Exam 2 (Exam here, report here (Word, idiots))

Q(3) (added 15/08/23) – The stem plot is incorrectly drawn, which can muck up the calculation of the five-number summary, asked for in part (b).

2022 Exam 2 (Exam here, report here (Word, idiots) – discussed here)

QA(3)(a)(i) (added 27/12/22) – This is not an error yet, but it will be. The issue is whether “Day Number” (for eight consecutive days) is a “numerical variable”: based on their past idiocy, it is clear that VCAA will claim, falsely, that it is not. (15/08/23 – Yep, VCAA screwed up. The report notes, “[the answer] 6 was a common error by students who had chosen ‘day number’ as a numerical variable”.)

2022 Exam 1 (Here, report here (Word, idiots) – discussed here)

QA(1) (added 26/12/22) – A badly borderline question on the shape of a distribution. A really great way to start an exam.

QA(21) (added 26/12/22) – A badly ambiguous question on nominal interest rates. It is unclear whether negative rates and/or (more plausibly) periods greater than a year were to be considered.

2022 NHT Exam 2 (Exam here, report here)

QA(2)(a) (added 27/12/22) – The report indicates that “year” is regarded as a “categorical variable”. This is absurd.

2021 Exam 2 (Exam here, report here (Word, idiots) – discussed here)

QA(1)(f) (added 24/11/21 – discussed here) The question asks for a minimum value to be an outlier, which, by definition of an outlier, cannot exist. (26/12/22  – The answer in the exam report is simply false.)

QB(1)(2)(c) (added 24/11/21 – discussed here) The indicated matrix M² is not the square of the matrix M provided earlier in the question.

2021 NHT, Exam 2 (Here, and report here)

QA(18) (added 02/11/22) A bad question, hingeing on whether “Event”, listed as “1” or “2”, is a “nominal variable”. The report indicates that it is, which is probably best considered wrong. (Compare QA(2) on 2016 Exam 1, and the report.)

2021 NHT, Exam 1 (Here, and report here)

QA(18) (added 02/11/22) Madness. A multiple choice question on compound interest, for which none of the available answers is correct (or even close). The examination report indicates no answer, simply noting

As a result of psychometric analysis, the question was invalidated.

Some psychometric analysis is probably in order, but VCAA appears to be pointing their psych gun at the wrong target.

2019, Exam 2 (Here, and report here)

QA(1)(a) (added 27/12/22) The question asks which of the two variables in a table is “ordinal”. The report indicates that “day number” (for fifteen consecutive days) is ordinal. Given the other choice was “temperature”, there wasn’t much alternative. But the better answer, and the only properly correct answer, is “neither”. The exam report notes “A small number of students answered ‘neither’”, without indicating whether this answer was deemed correct.

2019, Exam 1 (Here, and report here)

QB(6) (added 21/09/20) The solution requires that a Markov process is involved, although this is not stated, either in the question or in the report.

2018 NHT, Exam 1 (Here, and report here)

MCQ4 (added 23/09/20) The question provides a histogram for a continuous distribution (bird beak sizes), and asks for the “closest” of five listed values to the interquartile range. As the examination report almost acknowledges (presumably in time for the grading), this cannot be determined from the histogram; three of the listed values may be closest, depending upon the precise distribution. The report suggests one of these values as the “best” estimate, but does not rely upon this suggestion. See the comments below.

2017 Exam 2 (Here, and report here)

Q1(c)(ii) (added 13/11/20) – discussed here. The question is fundamentally nonsense, since there are infinitely many 1 x 3 matrices L that will solve the equation. As well, the 3 x 1 matrix given in the question does not represent the total value of the three products as indicated in Q(c)(i). The examination report does not acknowledge either error, but does add irony to the error by whining about students incorrectly answering with a 3 x 1 matrix. (30/10/22) The examination report has finally been amended to acknowledge the obvious error, albeit in a snarky “no harm, no foul” manner. The fundamental nonsense of the question remains unacknowledged. As commenter DB has noted, the examination report also makes the hilarious claim,

The overwhelming majority [of students] answered the question in the manner intended without a problem.

We’re not sure the “minority” of 72% of students who scored 0/1 on the question would agree.
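
(To spell out the infinitude, with hypothetical numbers, since the exam’s are beside the point: if the given 3 x 1 matrix is \boldsymbol{\begin{pmatrix}2\\3\\5\end{pmatrix}} and the required total is 10, then any \boldsymbol{L = \begin{pmatrix}a&b&c\end{pmatrix}} with \boldsymbol{2a + 3b + 5c = 10} solves the equation. That is one linear equation in three unknowns: \boldsymbol{\begin{pmatrix}1&1&1\end{pmatrix}} works, but so do \boldsymbol{\begin{pmatrix}5&0&0\end{pmatrix}} and \boldsymbol{\begin{pmatrix}0&0&2\end{pmatrix}}, and infinitely many others.)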

2017 Exam 1 (Here, and report here)

MCQ11  (added 13/11/20) – discussed here. None of the available answers is correct, since seasonal indices can be negative. The examination report does not acknowledge the error.

MCQ6 Module 2 (added 05/09/22) – discussed here. The intention of the question is reasonably clear, but the expression “how many different ways” is, at minimum, clumsily ambiguous, and one can argue for either C or D or E being correct. The intended answer was E, but many students also answered C or D. The examination report suggests that the incorrect answers were due to “simple counting errors”, which is possible but far from definite. The report also writes “Most students answered option B, C or D”, which is contradicted by the statistics; presumably the statistics are correct and the sentence is wrong, but it is unclear.

2015 Exam 1 (Here, and report here)

MCQ9 Module 2 (added 30/09/20) The question refers to cutting a wedge of cheese to make a “similar” wedge of cheese, but the new wedge is not (mathematically) similar. The exam report states that the word “similar” was intended “in its everyday sense” but noted the confusion, albeit in a weasely, “who woulda thought?” manner. A second answer was marked correct, although only after a fight over the issue.

2011 Exam 1 (Here, and report here)

MCQ3  (added 23/11/22). The question asks for the “closest” value for the median, but this cannot be determined from the provided histogram; two of the provided answers (B and C) may be correct. The examination report is silent on the issue.

Hard SEL: The Specialist Error List

This is the home for Specialist Mathematics exam errors. The guidelines are given on the Methods error post, and there is also a Further error post.

UPDATE (02/11/21)

The list is now “complete”, in the sense that it includes all the errors of which we are aware. (We have given the earlier exams only a very, very quick scan.) We will update and correct the list, whenever anything is brought to our attention, and of course when new exams appear.


MELting Pot: The Methods Error List

UPDATE (02/11/21)

The list is now “complete”, in the sense that it includes all the errors of which we are aware. (We have given the earlier exams only a very, very quick scan.) We will update and correct the list, whenever anything is brought to our attention, and of course when new exams appear.

**************************************

We’re not really ready to embark upon this post, but it seems best to get it underway ASAP, and have commenters begin making suggestions.

It seems worthwhile to have all the Mathematical Methods exam errors collected in one place: this is to be the place.*

Our plan is to update this post as commenters point out the exam errors, and so slowly (or quickly) we will compile a comprehensive list.

To be as clear as possible, by “error”, we mean a definite mistake, something more directly wrong than pointlessness or poor wording or stupid modelling. The mistake can be intrinsic to the question, or in the solution as indicated in the examination report; examples of the latter could include an insufficient or incomplete solution, or a solution that goes beyond the curriculum. Minor errors are still errors and will be listed.

With each error, we shall also indicate whether the error is (in our opinion) major or minor, and we’ll indicate whether the examination report acknowledges the error, updating as appropriate. Of course there will be judgment calls, and we’re the boss. But, we’ll happily argue the tosses in the comments.

Get to work!

*) Yes, there are also homes for Specialist Mathematics and Further Mathematics errors.

MitPY 9: Team Games

This MitPY is from commenter HollyBolly, who asked on the previous MitPY for some advice on diplomacy.*

Can you guys after all the serious business give me some advice for this situation: on a middle school Pythagoras and trig test, for a not very strong group of students. Questions are to be different from routine ones provided with the textbook subscription. I try “Verify that the triangle with sides (here: some triple, different from 3 4 5) is right, then find all its angles”. After reviewing, the question comes back: “Verify by drawing that a triangle with sides…”

How do you respond if that review has come from:

A. The HoD;

B. A teacher with more years at the school than me but equal in responsibilities in the maths department;

C. A teacher fresh from uni, in their 20s.

Regards.
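
(For the record, the intended mathematics is one line. Taking 5-12-13 as an illustrative triple: \boldsymbol{5^2 + 12^2 = 25 + 144 = 169 = 13^2}, so the triangle is right-angled, by the converse of Pythagoras; the angles are then \boldsymbol{90^\circ}, \boldsymbol{\arctan(12/5) \approx 67.4^\circ} and \boldsymbol{\arctan(5/12) \approx 22.6^\circ}. No drawing required.)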

*) Yeah, yeah. We’ll stay right out of the discussion on this one.

MitPY 8: Verification Code

It’s a long time since we’ve had a MitPY. But, the plague goes on (including the plague of right-wing Creightons).

This one comes from frequent commenter Red Five, and we apologise for the huge delay in posting. It is targeted at those familiar with and, more likely, struggling with Victoria’s VCE rituals:

VCAA uses some pretty strange words in exam questions, and the more exam papers I read, especially for Specialist Mathematics 34, the more I can’t get a firm idea of how they distinguish between the meanings of “show that”, “verify that” and “prove that”.

“Verify” seems to mean “by substitution”, “show that” seems to mean “given these very specific parameters” and “prove that” seems to be more general, but is it really this simple?

Shuffling NAPLAN’s Deckchairs

We’re late to this, but it’s gotta be done.

Some State education ministers, unhappy with NAPLAN, commissioned a review, which appeared a couple of weeks ago. The Review considers many contentious aspects of NAPLAN, but we’ll focus upon “numeracy”, NAPLAN’s homeopathic proxy for mathematics. We’ll leave others to debate “literacy” and the writing tests, and the timing and reporting and so forth.

So, what might the Review entail for the Son-of-NAPLAN testing of mathematics? Bugger all.

Which was always going to happen. For all the endless public and pundit whining about NAPLAN, which is what prompted this latest Review, none of the criticism has been aimed at the two elephants: the Australian Curriculum, which underpins NAPLAN, is a meatless mass of gristle and fat; and “numeracy” is not mathematics, is not arithmetic, and is barely anything. The inevitable consequence is that NAPLAN amounts to the aimless testing of untestable fuzz. As Gertrude Stein would have put it, there is no there there to test.

This misdirection of the Review was locked in by the terms of reference. No mention is made of “mathematics” or “arithmetic”. The single reference in the Terms to “numeracy” is a deadpan call for “the most efficient and effective system for assessing key literacy and numeracy outcomes”, as if this were a clear and unproblematic and worthy goal. It is no surprise, therefore, that the Review gives almost no attention to arithmetic and mathematics, and the meaning(lessness) of numeracy, and indeed works actively to avoid it.

The Review includes a capsule summary of the Numeracy tests, a superficial comparison to PISA and TIMSS, and Australia’s relative performance over time on these tests (pp 34-42). There is no proper exposition, however, of the nature of the tests. There is nothing reflecting the hard fact that NAPLAN and PISA are pseudomathematical garbage. TIMSS, on the other hand, is decidedly not garbage, so what does the Review do with that? That is interesting.

In what could have been a beacon paragraph, the Review compares the Australian Curriculum with expectations on TIMSS:

“… The Australian Curriculum emphasis on knowing and applying is similar to TIMSS but the Australian Curriculum does not appear to cover some of the complexity that is described in the TIMSS framework under reasoning. It seems likely, too, that a substantial number of TIMSS mathematics items are beyond Australian Curriculum expectations for achievement, especially at the Year 4 level.”

In summary, the emphasis on “knowing and applying” mathematics in the Australian Curriculum is just like TIMSS, as long as you don’t really care how much students know, or how deeply they can apply it, or how successful you “expect” them to be at it. Yep, two peas in a pod.

What does the Review then do with this critical paragraph? Nothing. They just drone ahead. Here is the indication that their entire Review is doomed to idiot trivialities, but they can’t see it, or won’t admit it. They see the smoke, note the smoke, but it doesn’t occur to them, or they just can’t be bothered, or it wasn’t in their idiot Terms, to look for the damn gun.

Finally, what of the recommendations proposed by the Review? There are two that concern the testing of numeracy and/or mathematics. The first, Recommendation 2.2, is that authorities

“Rename the numeracy test as mathematics …”

Huh. And what would be the purpose of that? Well, supposedly it would “clarify that [the test] assesses the content and proficiency strands of the Australian Curriculum: Mathematics”. Except, of course, and as the Review itself acknowledges, the Numeracy test doesn’t do anything of the sort. And, even to the minimal extent that it does, it just points back to Elephant Number One, that the Australian Curriculum is not a properly sound basis for anything.

The isolated suggestion to rename a test is of course a distracting triviality. Alas, not all of the Review’s recommendations are so trivial. Recommendation 2.3 proposes a new test, for

“… [the] assessment of critical and creative thinking in science, technology, engineering and mathematics (STEM) …”

Ah, yes. Let’s test whether ten-year-old Tommy is the new Einstein.

This is a monumentally stupid recommendation. Is Jenny the next Newton? Maybe. But can she manipulate numbers and expressions with sufficient speed and accuracy to hold, let alone mould, a substantial mathematical thought in her head? Just maybe you might want to test for that first? Is Carol the new Capote? Then perhaps first teach her the basics of grammar, first teach her how to construct a clear and correct sentence. Then you can think to tease out all the great works inside her. Is Fritz another Mozart? Gee, I dunno. How are his scales? And on and on.

This constant, idiot call for the teaching of and, worse, the testing of “higher order” thinking, this mindless genuflection to reasoning and creativity, is maddening. It ignores the stubborn fact that deeper thought and creativity in any discipline can only be built upon the craft, upon the basic knowledge and skills of that discipline. The Review’s call is even worse for that, since STEM isn’t a discipline, it’s just a foggy con job.

This Godzilla versus Mothra battle is never likely to end, nor likely to end well. On the one side are the numeracy nuts, who can’t see the value of skills independent of some ridiculous application. On the other side are the creativity clowns, who ludicrously denigrate “the basics”, and ludicrously paint NAPLAN as the basics they’re denigrating. Neither side exhibits any understanding of what the basics are, or their critical importance. Neither side has a clue. Which means, unless and until these two monsters somehow destroy each other, we’re all doomed.

WitCH 43: Period Piece

This one comes courtesy of a smart VCE student, the issue having been flagged to them by a fellow student. It is a multiple choice question from the 2009 Mathematical Methods Exam 2; the Examination Report indicates, without comment, that the correct answer is D.

UPDATE (08/12/23)

The exam report was amended on 18/09/2020, after this post appeared. The report now includes a note, with no indication that the note was added a decade later:

B was also accepted as it leads to an equivalent expression.

If true, then the original exam report was consciously deceptive.