We’ve now gone through the multiple choice component of the exam, and we’ve read the comments below. In brief, and ignoring the screw-ups, most of the questions seemed good, and a number of questions were hard (which is good). We haven’t thought much about the extent to which the questions are trivialised by CAS/Mathematica, although this is of course extremely important; the comments below on this aspect are well worth a careful read.

Here are our question-by-question thoughts:

MCQ1. A decent and non-trivial stationary point question. A pretty mean way to begin.

MCQ2. A contrived and tricky range of function question. A very mean way to continue.

MCQ3. A rather weird piecewise constant acceleration question.

MCQ4. A good and not so easy composition of functions question.

MCQ5. Intrinsically a routine and good complex algebra question, but the presentation is a mess. The notation is introduced, but then plays no role; indeed, the question would have been vastly improved by having the offered answers expressed in terms of and . Requiring some extra algebraic manipulation to obtain the correct answer is needless, and a little contrived.

MCQ9. Complete nonsense, as flagged by commenter Red Five, below. See here.

MCQ10. A routine tank mixture problem.

MCQ11. A screw-up, and perhaps a semi-deliberate one, as flagged by commenter John Friend, below. See here.

MCQ12. A straight-forward but nice Euler’s method problem.
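For readers who want the mechanics spelled out, Euler's method just steps forward by h·f(x, y) each time. A minimal sketch; the ODE, step size and initial condition here are our own invention, not the exam's:

```python
def euler(f, x0, y0, h, steps):
    """Approximate the solution of dy/dx = f(x, y) by Euler's method."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # y_{n+1} = y_n + h * f(x_n, y_n)
        x += h
    return x, y

# Hypothetical example: dy/dx = y with y(0) = 1, stepped to x = 1.
# The exact value is e ≈ 2.71828; Euler with h = 0.1 gives 1.1**10 ≈ 2.59374.
x, y = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(x, y)
```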

MCQ13. A standard linear dependence problem. As noted by commenter John Friend, the problem is trivial with 3 x 3 determinants, which is not on the syllabus but which is commonly taught for this very purpose.
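The determinant shortcut is easy to sketch: three vectors in R³ are linearly dependent exactly when the 3 × 3 determinant they form vanishes. The vectors below are hypothetical, chosen so the third is the sum of the first two:

```python
def det3(u, v, w):
    # Determinant of the 3x3 matrix with rows u, v, w (cofactor expansion);
    # by transposing, the same test works for columns.
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def dependent(u, v, w, tol=1e-12):
    # Linearly dependent exactly when the determinant is 0.
    return abs(det3(u, v, w)) < tol

# Hypothetical vectors: w = u + v, so the three must be dependent.
u, v = (1, 2, 3), (0, 1, 4)
w = tuple(a + b for a, b in zip(u, v))
print(dependent(u, v, w))          # True
print(dependent(u, v, (1, 0, 0)))  # False
```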

MCQ14. A straight-forward force component question.

MCQ15. A nice parametrised curve question.

MCQ16. A nice dot product and double angle formula question.

MCQ17. A straight-forward acceleration as a function of distance question.

MCQ18. A straight-forward but nice string tension question.

MCQ19. A cricket ball with a mass of 0.02 kg? Otherwise, a nice change of momentum question.

MCQ20. A straight-forward but nice force and acceleration question.

UPDATE (04/12/2020)

We’ve now gone through Section B (extended question) of the exam, and we’ve read the comments below. There do not appear to be any significant screw-ups, but most of it is pretty poor. In the main, the questions are aimless and badly written, with CAS washing away the potentially good effect of any decent content. Nothing is quite a WitCH or PoSWW, but almost everything is close.

Here are our question-by-question thoughts:

Q1. A strikingly aimless parametrised motion question. Seriously, who gives a shit about any of it? Part (b)(i) asks for dy/dx as a function of t, to “hence” obtain the equation of the tangent at t = π, when it is more natural and simpler to first evaluate dy/dt and dx/dt at π. Then, (b)(ii) asks for the velocity at π, for which you need … This is stupid with a capital stupid.

Q2. An OK complex geometry question, which begins thusly:

Two complex numbers, u and v are defined as and .

Jesus. What’s wrong with “Let and “? The symbols and are pretty crappy choices for fixed complex numbers, and the later choice of for the centre of a circle is really crappy. Part (d), finding the centre and radius of this circle, would be a nice question in a CAS-free world.

Q3. The best question, graphing and then finding the number of inflection points of for . Much of the goodness is killed by CAS. It is not entirely clear what is meant by “asymptotes” in part (b). (See the discussion here.)

Q4. Another parametrised motion question, this one involving a pilot seemingly unaware of the third dimension. Pointless and boring CAS nonsense.

Q5. An absolute mess of a dynamics question. The diagram is shoddy. The appropriate range of the frictional parameter should be given or determined before asking students to compute a Fantasyland acceleration. Part (e), which feels like an afterthought, involves a jarring and needless switch from the algebraic to numeric, with a specific velocity and implausible force plucked from thin air.

We’ve finally gone through the exam, we’ve read the discussion below, and here are our thoughts.

In brief, the exam is OK but no better, and there are issues. There is some decent testing of skills, but the emphasis (as in the Methods 1 exam) appears to be on fiddly computation rather than deeper concepts. That isn’t great for a 1-hour sprint exam, and commenters have suggested the exam was overly long, but of course a 1-hour sprint exam is intrinsically insane. At a deeper level, some of the questions are contrived and aimless, which is standard, but it feels a little worse this year. And, there are screw-ups.

Here are our question-by-question thoughts:

Q1. The kind of pointless and boring mechanics question whose sole purpose is to make mechanics look bad. Part (a) asks students to compute the normal force, but to no end; the normal force is not required for the rest of the question.

Q4. A good inequality question involving absolute values. The question is not difficult but, as commenters have suggested, it seems likely that students will do the question poorly.

Q5. A pretty nice vector resolute (projection) question, sort of a coherent version of last year’s debacle. Part (a) is contrived and flawed by having to choose the integer solution from the two roots of the quadratic; it’s not a hanging offence, but it’s the kind of oddity that would make a thoughtful writer think again.

Q7. An OK if (for a Specialist exam) unusual integration question involving continuity and differentiability of a “hybrid function”. The wording is clumsy, since all that is required is to demand that the function be differentiable; continuity of the function is then automatic, and the demanded continuity of the derivative is irrelevant. Sure, spelling out the continuity may simply be being nice, but including the continuity of the derivative suggests the examiners don’t really get it, or are planning a sleight of hand. We’ll see. Given the most authoritative (Methods) textbook makes a complete hash of this topic, it will be interesting to see if the examination report can get it right. We wouldn’t be betting the house on it.
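The point that differentiability subsumes continuity is easy to illustrate with a hypothetical hybrid function (ours, not the exam's): matching the one-sided slope formulas alone is not enough, and once differentiability holds, continuity comes for free.

```python
# Hypothetical hybrid function, joined at x = 1:
#   f(x) = x**2  for x <= 1,   f(x) = a*x + b  for x > 1.
# Differentiability at the join forces the slopes to match (a = 2) and,
# automatically, continuity (a + b = 1, so b = -1); separately demanding a
# continuous derivative adds nothing.
def is_differentiable_at_join(a, b, h=1e-6, tol=1e-4):
    f = lambda x: x ** 2 if x <= 1 else a * x + b
    left = (f(1.0) - f(1.0 - h)) / h    # one-sided difference quotients
    right = (f(1.0 + h) - f(1.0)) / h
    return abs(left - right) < tol

print(is_differentiable_at_join(2, -1))  # True: the differentiable choice
print(is_differentiable_at_join(2, 5))   # False: slope formulas "match", but f jumps
```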

Q8. An OK but ridiculously contrived volume of revolution question. Asking for the volume to be given in the form where is needless, ill-defined and dumb.

Q9. An OK but ridiculously contrived arclength question. The introduction of the symbol for the arclength is gratuitous and confusing. And (reviews notes), asking for the arclength to be given in the form where is needless, ill-defined and dumb.

UPDATE (21/11/20) A link to a parent complaining about the Methods Exam 2 on 774 is here.

UPDATE (24/11/20 – Corrected) A link to VCAA apparently pleading guilty to a CAS screw-up (from 2010) is here. (Sorry, my goof to not check the link, and thanks to Worm and John Friend.)

UPDATE (05/12/2020)

We’ve now gone through the multiple choice component of the exam, and we’ve read the comments below. In general the questions seemed pretty standard and ok, with way too much CAS and other predictable irritants. A few questions were a bit weird, generally to good effect, although one struck us as off-the-planet weird.

Here are our question-by-question thoughts:

MCQ1. A trivial composition of functions question.

MCQ2. A simple remainder theorem question.

MCQ3. A simple antidifferentiation question, although the 2x under the root sign will probably trick more than a few students.

MCQ4. A routine trig question made ridiculous in the standard manner. Why the hell write the solutions to other than in the form ?

MCQ5. A trivial asymptotes question.

MCQ6. A standard and easy graph of the derivative question.

MCQ7. A nice chain rule question. It’s easy, but we’re guessing plenty of students will screw it up.

MCQ8. A routine and routinely depressing binomial CAS question.

MCQ9. A routine transformation of an integral question. Pretty easy, with John Friend’s gaming of the question or otherwise, but these questions seem to cause problems.

MCQ10. An unusual but OK logarithms question. It’s easy, but the non-standardness will probably confuse a number of students.

MCQ11. A standard Z distribution question.

MCQ12. A pretty easy but nice trigonometry and clock hands question.

MCQ13. The mandatory idiotic matrix transformation question, made especially idiotic by the eccentric form of the answers.

MCQ14. Another standard Z distribution question: do we really need two of these? This one has a strangely large number of decimal places in the answers, the last of which appears to be incorrect.

MCQ15. A nice average value of a function question. It can be done very quickly by first raising and then lowering the function by units.

MCQ16. A routine max-min question, which would be nice in a CAS-free world.

MCQ17. A really weird max-min question. The problem is to find the maximum vertical intercept of . It is trivial if one uses the convexity, but that is far from trivial to think of. Presumably some Stupid CAS Trick will also work.

MCQ18. A somewhat tangly range of a function question. A reasonable question, and not hard if you’re guided by a graph, but we suspect students won’t do the question that well.

MCQ19. A peculiar and not very good “probability function” question. In principle the question is trivial, but it’s made difficult by the weirdness, which outweighs the minor point of the question.

MCQ20. All we can think is the writers dropped some acid. See here.

UPDATE (06/12/2020)

And, we’re finally done, thank God. We’ve gone through Section B of the exam and read the comments below, and we’re ready to add our thoughts.

This update will be pretty brief. Section B of Methods Exam 2 is typically the Elephant Man of VCE mathematics, and this year is no exception. The questions are long and painful and aimless and ridiculous and CAS-drenched, just as they always are. There’s not much point in saying anything but “No”.

Here are our question-by-question thoughts:

Q1. What could be a nice question about the region trapped between two functions becomes pointless CAS shit. Finding “the minimum value of the graph of ” is pretty weird wording. The sequence of transformations asked for in (d) is not unique, which is OK, as long as the graders recognise this. (Textbooks seem to typically get this wrong.)

Q2. Yet another fucking trig-shaped river. The subscripts are unnecessary and irritating.

Q3. Ridiculous modelling of delivery companies, with clumsy wording throughout. Jesus, at least give the companies names, so we don’t have to read “rival transport company” ten times. And, yet again with the independence:

“Assume that whether each delivery is on time or earlier is
independent of other deliveries.”

Q4. Aimless trapping of area between a function and line segments.

Q5. The most (only) interesting question, concerning tangents of , but massively glitchy and poorly worded, and it’s still CAS shit. The use of subscripts is needless and irritating. More Fantasyland computation, calculating in part (a), and then considering the existence of in part (b). According to the commenters, part (d)(ii) screws up on a Casio. Part (e) could win the Bulwer-Lytton contest:

“Find the values of for which the graphs of and ,
where exists, are parallel and where “

We have no clue what was intended for part (g), a 1-marker asking students to “find” which values of result in having a tangent at some with -intercept at . We can’t even see the Magritte for this one; is it just intended for students to guess? Part (h) is a needless transformation question, needlessly in matrix form, which is really the perfect way to end.

UPDATE (31/12/20) The exam, and all the exams, are now online. (Thanks to Red Five for the flag.)

OK, we should have thought of this earlier. This post is for teachers and students (and fellow travellers) to discuss Methods Exam 1, which was held a few days ago. (There are also posts for Methods Exam 2, Specialist Exam 1 and Specialist Exam 2. We had thought of also putting up posts for Further, but decided to stick to mathematics.) We’ll also update with our brief thoughts in the near future.

Our apologies to those without access to the exam, and unfortunately VCAA is only scheduled to post the 2020 VCE exams sometime in 2023. The VCAA also has a habit of being dickish about copyright (and in general), so we won’t post the exam or reddit-ish links here. If, however, a particular question or two prompts sufficient discussion, we’ll post those questions. And, we might allow (undisplayed) links to the exams stay in the comments.

UPDATE (21/11/20) The link to the parent complaining about the Methods Exam 1 on 3AW is here. If you see any other media commentary, please note that in a comment (or email me), and we’ll add a link.

UPDATE (23/11/20) OK, we’ve now gone through the first Methods exam quickly but pretty thoroughly, have had thoughts forwarded by commenters Red Five and John Friend, and have pondered the discussion below. Question by question, we didn’t find the exam too bad, although we didn’t look to judge length and coverage of the curriculum. There was a little Magritteishness but we didn’t spot any blatant errors, and the questions in general seemed reasonable enough (given the curriculum, and see here). Here are our brief thoughts on each question, with no warranty for fairness or accuracy. Again, apologies to those without access to the exam.

Q1. Standard and simple differentiation.

Q2. A “production goal” having the probability of requiring an oil change be m/(m+n) … This real-world scenarioising is, of course, idiotic. The intrinsic probability questions being asked are pretty trivial, indeed so trivial and same-ish that we imagine many students will be tricked. It’s not helped by a weird use of “State” in part (a), and a really weird and gratuitous use of “given” in part (b), for a not-conditional probability question.

Q3. An OK question on the function tan(ax+b). Stating “the graph is continuous” is tone-deaf and, given they’ve drawn the damn thing, a little weird. The information a > 0 and 0 < b < 1 should have been provided when defining the function, not as part of the eventual question. Could someone please send the VCAA guys a copy of Strunk and White, or Fowler, or Gowers, or Dr. Seuss?

Q6. An OK graphing-integration question, incorporating VCAA’s fetish. Interestingly, solving the proper equation in (b) is, for a change, straight-forward (although presumably the VCAA will still permit students to cheat, and solve instead). As discussed in the comments, the algebra in part (c) is a little heavier than usual, and perhaps unexpected, although hardly ridiculous. The requirement to express the final answer in the form , however, is utterly ridiculous.

Q7. This strikes us as a pretty simple tangents-slopes question, although maybe the style of the question will throw students off. Part (c) is in effect asking, in a convoluted manner, for the point on a no-intercepts parabola closest to the x-axis. Framed this way, the question is easy. The convolution, however, combined with the no-intercepts property having only appeared implicitly in a pretty crappy diagram, will probably screw up plenty of students.

Q8. A second integration question featuring VCAA’s fetish. Did we really need two? The implicit hint in part (c) and the diagram are probably enough to excuse the Magritteness of part (d), but it’s a close call. Much less excusable is part (b):

“Find the area of the region that is bounded by f, the line x = a and the horizontal axis for x in [a,b], where b is the x-intercept of f.”

Forget Dr. Seuss. Someone get them some Ladybird books.

Yeah, it’s the same joke, but it’s not our fault: if people keep screwing up trials, we’ll keep making “trial” jokes. In this case the trial is MAV‘s Trial Exam 1 for Mathematical Methods. The exam is, indeed, a trial.

Regular readers of this blog will be aware that we’re not exactly a fan of the MAV (and vice versa). The Association has, on occasion, been arrogant, inept, censorious, and demeaningly subservient to the VCAA. The MAV is also regularly extended red carpet invitations to VCAA committees and reviews, and they have somehow weaseled their way into being a member of AMSI. Acting thusly, and treated thusly, the MAV is a legitimate and important target. Nonetheless, we generally prefer to leave the MAV to their silly games and to focus upon the official screwer upperers. But, on occasion, someone throws some of MAV’s nonsense our way, and it is pretty much impossible to ignore; that is the situation here.

As we detail below, MAV’s Methods Trial Exam 1 is shoddy. Most of the questions are unimaginative, unmotivated and poorly written. The overwhelming emphasis is not on testing insight but, rather, on tedious computation towards a who-cares goal, with droning solutions to match. Still, we wouldn’t bother critiquing the exam, except for one question. This question simply must be slammed for the anti-mathematical crap that it is.

The final question, Question 10, of the trial exam concerns the function

on the domain . Part (a) asks students to find and its domain, and part (b) then asks,

Find the coordinates of the point(s) of intersection of the graphs of and .

Regular readers will know exactly the Hellhole to which this is heading. The solutions begin,

Solve f(x) = x for x,

which is suggested without a single accompanying comment, nor even a Magrittesque diagram. It is nonsense.

It was nonsense in 2010 when it appeared on the Methods exam and report, and it was nonsense again in 2011. It was nonsense in 2012 when we slammed it, and it was nonsense again when it reappeared in 2017 and we slammed it again. It is still nonsense, it will always be nonsense and, at this stage, the appearance of the nonsense is jaw-dropping and inexcusable.

It is simply not legitimate to swap the equation f(x) = f⁻¹(x) for f(x) = x, unless a specific argument is provided for the specific function. When valid, that can usually be done. Easily. We laid it all out, and if anybody in power gave a damn then this type of problem could be taught properly and tested properly. But, no.
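A simple counterexample (our choice, not a function from the trial exam) shows why the swap fails in general:

```python
# Counterexample: f(x) = -x**3 is invertible and decreasing, with inverse
# g(y) = -y^(1/3).  Solving f(x) = x gives only x = 0, yet the graphs of
# f and g also meet at (1, -1) and (-1, 1), neither of which lies on the
# line y = x.
def f(t):
    return -t ** 3

def g(t):  # inverse of f, written to handle cube roots of negatives
    return -t ** (1 / 3) if t >= 0 else (-t) ** (1 / 3)

for t in (-1.0, 0.0, 1.0):
    assert abs(f(t) - g(t)) < 1e-12  # f and its inverse intersect at all three
print(f(1.0))  # -1.0: the intersection (1, -1) is off the line y = x
```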

What were the exam writers thinking? We can only see three possibilities:

a) The writers are too dumb or too ignorant to recognise the problem;

b) The writers recognise the problem but don’t give a damn;

c) The writers recognise the problem and give a damn, but presume that VCAA don’t give a damn.

We have no idea which it is, but we can see no fourth option. Whatever the reason, there is no longer any excuse for this crap. Even if one presumes or knows that VCAA will continue with the moronic, ritualistic testing of this type of problem, there is absolutely no excuse for not also including a clear and proper justification for the solution. None.

What of the rest of the MAV, what of the vetters and the reviewers? Did no one who checked the trial exam flag this nonsense? Or, were they simply overruled by others who were worse-informed but better-connected? What about the MAV Board? Is there anyone at all at the MAV who gives a damn?

*********************

Postscript: For the record, here, briefly, are other irritants from the exam:

Q2. There are infinitely many choices of integers and with equal to the indicated answer of .

Q3. This is not, or at least should not be, a Methods question. Integrals of the form with non-linear are not, or at least are not supposed to be, examinable.

Q4. The writers do not appear to know what “hence” means. There are, once again, infinitely many choices of and .

Q5. “Appropriate mathematical reasoning” is a pretty fancy title for the trivial application of a (stupid) definition. The choice of the subscripted is needlessly ugly and confusing. Part (c) is fundamentally independent of the boring nitpicking of parts (a) and (b). The writers still don’t appear to know what “hence” means.

Q6. An ugly question, guided by a poorly drawn graph. It is ridiculous to ask for “a rule” in part (a), since one can more directly ask for the coefficients , and .

Q7. A tedious question, which tests very little other than arithmetic. There are, once again, infinitely many forms of the answer.

Q8. The endpoints of the domain for are needlessly and confusingly excluded. The sole purpose of the question is to provide a painful, Magrittesque method of solving , which can be solved simply and directly.

Q9. A tedious question with little purpose. The factorisation of the cubic can easily be done without resorting to fractions.

Q10. Above. The waste of a precious opportunity to present and to teach mathematical thought.

UPDATE (28/09/20)

John (no) Friend has located an excellent paper by two Singaporean maths ed guys, Ng Wee Leng and Ho Foo Him. Their paper investigates (and justifies) various aspects of solving f(x) = f⁻¹(x).

This one feels relatively minor to us. It is, however, a clear own goal from the VCAA, and it is one that has annoyed many Mathematical Methods teachers. So, as a public service, we’re offering a place for teachers to bitch about it.*

One of the standard topics in Methods is the binomial distribution: the probabilities you get when repeatedly performing a hit-or-miss trial. Binomial probability was once a valuable and elegant VCE topic, before it was destroyed by CAS. That, however, is a story for another time; here, we have smaller fish to fry.

The hits-or-misses of a Binomial distribution are sometimes called Bernoulli trials, and this is how they are referred to in VCE. That is just jargon, and it doesn’t strike us as particularly useful jargon, but it’s ok.** There is also what is referred to as the Bernoulli distribution, where the hit-or-miss is performed exactly once. That is, the Bernoulli distribution is just the n = 1 case of the binomial distribution. Again, just jargon, and close to useless jargon, but still sort of ok. Except it’s not ok.
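To spell the jargon out in a couple of lines (the numbers below are our own, not from any exam): the Bernoulli distribution is exactly the n = 1 case of the binomial, with P(X = 1) = p and P(X = 0) = 1 − p.

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# The "Bernoulli distribution" is nothing but the n = 1 binomial.
p = 0.3  # hypothetical hit probability
print(binom_pmf(1, 1, p))  # 0.3, i.e. p
print(binom_pmf(0, 1, p))  # 0.7, i.e. 1 - p
```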

Neither the VCE study design nor, we’re guessing, any of the VCE textbooks, makes any reference to the Bernoulli distribution. Which is why the special, Plague Year formula sheet listing the Bernoulli distribution has caused such confusion and annoyance:

Now, to be fair, the VCAA were trying to be helpful. It’s a crazy year, with big adjustments on the run, and the formula sheet*** was heavily adapted for the pruned syllabus. But still, why would one think to add a distribution, even a gratuitous one? What the Hell were they thinking?

Does it really matter? Well, yes. If “Bernoulli distribution” is a thing, then students must be prepared for that thing to appear in exam questions; they must be familiar with that jargon. But then, a few weeks after the Plague Year formula sheet appeared, schools were alerted and VCAA’s Plague Year FAQ sheet**** was updated:

This very wordy weaseling is VCAA-speak for “We stuffed up but, in line with long-standing VCAA policy, we refuse to acknowledge we stuffed up”. The story of the big-name teachers who failed to have this issue addressed, and of the little-name teacher who succeeded, is also very interesting. But, it is not our story to tell.

MCQ4 (added 23/09/20) The question provides a histogram for a continuous distribution (bird beak sizes), and asks for the “closest” of five listed values to the interquartile range. As the examination report almost acknowledges (presumably in time for the grading), this cannot be determined from the histogram; three of the listed values may be closest, depending upon the precise distribution. The report suggests one of these values as the “best” estimate, but does not rely upon this suggestion. See the comments below.

Q1(c)(ii) (added 13/11/20) – discussed here. The question is fundamentally nonsense, since there are infinitely many 1 x 3 matrices L that will solve the equation. As well, the 3 x 1 matrix given in the question does not represent the total value of the three products as indicated in Q(c)(i). The examination report does not acknowledge either error, but does add irony to the error by whining about students incorrectly answering with a 3 x 1 matrix.

MCQ11 (added 13/11/20) – discussed here. None of the available answers is correct, since seasonal indices can be negative. The examination report does not acknowledge the error.

MCQ9 Module 2 (added 30/09/20) The question refers to cutting a wedge of cheese to make a “similar” wedge of cheese, but the new wedge is not (mathematically) similar. The exam report states that the word “similar” was intended “in its everyday sense” but noted the confusion, albeit in a weasely, “who woulda thought?” manner. A second answer was marked correct, although only after a fight over the issue.

Q10(c) (added 13/11/20) – discussed here. The intended solution requires computing a doubly improper integral, which is beyond the scope of the subject. The examination report ducks the issue, by providing only an answer, with no accompanying solution.

Q3(b) (added 13/11/20) – discussed here. The wording of the question is fundamentally flawed, since the “maximum possible proportion” of the function does not exist here, and in any case need not be equal to the “limiting value” of the function. The examination “report” contains nothing but the intended answer.

MCQ20 (added 24/09/20) The notation refers to the forces in the question being asked, and seemingly also in the diagram for the question, but to the magnitudes of these forces in the suggested answers. The examination report doesn’t acknowledge the error.

We’re not really ready to embark upon this post, but it seems best to get it underway ASAP, and have commenters begin making suggestions.

It seems worthwhile to have all the Mathematical Methods exam errors collected in one place: this is to be the place.*

Our plan is to update this post as commenters point out the exam errors, and so slowly (or quickly) we will compile a comprehensive list.

To be as clear as possible, by “error”, we mean a definite mistake, something more directly wrong than pointlessness or poor wording or stupid modelling. The mistake can be intrinsic to the question, or in the solution as indicated in the examination report; examples of the latter could include an insufficient or incomplete solution, or a solution that goes beyond the curriculum. Minor errors are still errors and will be listed.

With each error, we shall also indicate whether the error is (in our opinion) major or minor, and we’ll indicate whether the examination report acknowledges the error, updating as appropriate. Of course there will be judgment calls, and we’re the boss. But, we’ll happily argue the tosses in the comments.

Q9(c), Section B (added 13/11/20) – discussed here. The question contains a fundamentally misleading diagram, and the solution involves the derivative of a function at the endpoint of a closed interval, which is beyond the scope of the course. The examination report is silent on both issues.

Q3(h), Section B (added 06/10/20) – discussed here. This is the error that convinced us to start this blog. The question concerns a “probability density function”, but with integral unequal to 1. As a consequence, the requested “mean” (part (i)) and “median” (part (ii)) make no definite sense.

There are three natural approaches to defining the “median” for part (ii), leading to three different answers to the requested two decimal places. Initially, the examination report acknowledged the issue, while weasely avoiding direct admission of the fundamental screw-up; answers to the nearest integer were accepted. A subsequent amendment, made over two years later, made the report slightly more honest, although the term “screw-up” still does not appear.
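A made-up example (ours, not the exam's pdf) shows how the natural definitions diverge. Take the purported pdf g(x) = x/4 on [0, 3], which integrates to 9/8 rather than 1; demanding area 1/2 from the left, area 1/2 from the right, or half of the total area each gives a different "median":

```python
from math import sqrt

# Hypothetical "pdf": g(x) = x/4 on [0, 3], with total integral 9/8, not 1.
# The antiderivative from 0 is G(m) = m**2 / 8, so each candidate "median"
# solves a different equation.
total = 3 ** 2 / 8                # 1.125, not 1

m_left = sqrt(8 * 0.5)            # G(m) = 1/2:          m = 2
m_right = sqrt(9 - 8 * 0.5)       # total - G(m) = 1/2:  m = sqrt(5) ≈ 2.236
m_norm = sqrt(8 * total / 2)      # G(m) = total/2:      m = 3/sqrt(2) ≈ 2.121
print(m_left, m_right, m_norm)    # three different answers
```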

As noted in the comment and update to this post, the “mean” in part (i) is most naturally defined in a manner different to that suggested in the examination report, leading to a different answer. The examination report still fails to acknowledge any issue with part (i).

Q4(c), Section B (added 25/09/20) The solution in the examination report sets up (but doesn’t use) the equation dy/dx = stuff = 0, instead of the correct d/dx(stuff) = 0.

MCQ4 (added 21/09/20) – discussed here. The described function need not satisfy any of the suggested conditions, as discussed here. The underlying issue is the notion of “inflection point”, which was (and is) undefined in the syllabus material. The examination report ignores the issue.

Q4, Section 2 (added 23/09/20) The vertex of the parabola is incorrectly labelled (-1,0), instead of (0,-1). The error is not acknowledged in the examination report.

Q7(b) (added 23/09/20) The question asks students to “find p“, where p is the probability that a biased coin comes up heads, and where it turns out that p = 0 is one of the solutions. The question is fatally ambiguous, since there is no definitive answer to whether p = 0 is possible for a “biased coin”.

The examination report answer includes both values of , while also noting “The cancelling out of p was rarely supported; many students incorrectly [sic] assumed that p could not be 0.” The implication, but not the certainty, is that although 0 was intended as a correct answer, students who left out or excluded 0 could receive full marks IF they explicitly “supported” this exclusion.

This is an archetypal example of the examiners stuffing up, refusing to acknowledge their stuff up, and refusing to attempt any proper repair of their stuff up. Entirely unprofessional and utterly disgraceful.

MCQ17 (added 28/09/20) – discussed here. Same as in the 2014 Exam 2, above: the described function need not satisfy any of the suggested conditions, as discussed here.

2007 EXAM 2 (Here, report here and discussed here)

MCQ12 (added 26/09/20) Same as in the 2014 Exam 2, above: the described function need not satisfy any of the suggested conditions, as discussed here.