OK, we should have thought of this earlier. This post is for teachers and students (and fellow travellers) to discuss Methods Exam 1, which was held a few days ago. (There are also posts for Methods Exam 2, Specialist Exam 1 and Specialist Exam 2. We had thought of also putting up posts for Further, but decided to stick to mathematics.) We’ll also update with our brief thoughts in the near future.
Our apologies to those without access to the exam; unfortunately, VCAA is only scheduled to post the 2020 VCE exams sometime in 2023. The VCAA also has a habit of being dickish about copyright (and in general), so we won’t post the exam or reddit-ish links here. If, however, a particular question or two prompts sufficient discussion, we’ll post those questions. And, we might allow (undisplayed) links to the exams to stay in the comments.
UPDATE (21/11/20) The link to the parent complaining about the Methods Exam 1 on 3AW is here. If you see any other media commentary, please note that in a comment (or email me), and we’ll add a link.
UPDATE (23/11/20) OK, we’ve now gone through the first Methods exam quickly but pretty thoroughly, have had thoughts forwarded by commenters Red Five and John Friend, and have pondered the discussion below. Question by question, we didn’t find the exam too bad, although we didn’t look to judge length and coverage of the curriculum. There was a little Magritteishness but we didn’t spot any blatant errors, and the questions in general seemed reasonable enough (given the curriculum, and see here). Here are our brief thoughts on each question, with no warranty for fairness or accuracy. Again, apologies to those without access to the exam.
Q1. Standard and simple differentiation.
Q2. A “production goal” having the probability of requiring an oil change be m/(m+n) … This real-world scenarioising is, of course, idiotic. The intrinsic probability questions being asked are pretty trivial, indeed so trivial and same-ish that we imagine many students will be tricked. It’s not helped by a weird use of “State” in part (a), and a really weird and gratuitous use of “given” in part (b), for a not-conditional probability question.
Q3. An OK question on the function tan(ax+b). Stating “the graph is continuous” is tone-deaf and, given they’ve drawn the damn thing, a little weird. The information a > 0 and 0 < b < 1 should have been provided when defining the function, not as part of the eventual question. Could someone please send the VCAA guys a copy of Strunk and White, or Fowler, or Gowers, or Dr. Seuss?
Q6. An OK graphing-integration question, incorporating VCAA’s fetish. Interestingly, solving the proper equation in (b) is, for a change, straightforward (although presumably the VCAA will still permit students to cheat, and solve instead). As discussed in the comments, the algebra in part (c) is a little heavier than usual, and perhaps unexpected, although hardly ridiculous. The requirement to express the final answer in the form , however, is utterly ridiculous.
Q7. This strikes us as a pretty simple tangents-slopes question, although maybe the style of the question will throw students off. Part (c) is in effect asking, in a convoluted manner, for the point on a no-intercepts parabola closest to the x-axis. Framed this way, the question is easy. The convolution, however, combined with the no-intercepts property having only appeared implicitly in a pretty crappy diagram, will probably screw up plenty of students.
Q8. A second integration question featuring VCAA’s fetish. Did we really need two? The implicit hint in part (c) and the diagram are probably enough to excuse the Magritteness of part (d), but it’s a close call. Much less excusable is part (b):
“Find the area of the region that is bounded by f, the line x = a and the horizontal axis for x in [a,b], where b is the x-intercept of f.”
Forget Dr. Seuss. Someone get them some Ladybird books.
Regular readers of this blog will be aware that we’re not exactly a fan of the MAV (and vice versa). The Association has, on occasion, been arrogant, inept, censorious, and demeaningly subservient to the VCAA. The MAV is also regularly extended red carpet invitations to VCAA committees and reviews, and they have somehow weaseled their way into being a member of AMSI. Acting thusly, and treated thusly, the MAV is a legitimate and important target. Nonetheless, we generally prefer to leave the MAV to their silly games and to focus upon the official screwer upperers. But, on occasion, someone throws some of MAV’s nonsense our way, and it is pretty much impossible to ignore; that is the situation here.
As we detail below, MAV’s Methods Trial Exam 1 is shoddy. Most of the questions are unimaginative, unmotivated and poorly written. The overwhelming emphasis is not on testing insight but, rather, on tedious computation towards a who-cares goal, with droning solutions to match. Still, we wouldn’t bother critiquing the exam, except for one question. This question simply must be slammed for the anti-mathematical crap that it is.
The final question, Question 10, of the trial exam concerns the function
on the domain . Part (a) asks students to find and its domain, and part (b) then asks,
Find the coordinates of the point(s) of intersection of the graphs of and .
Regular readers will know exactly the Hellhole to which this is heading. The solutions begin,
Solve for ,
which is suggested without a single accompanying comment, nor even a Magrittesque diagram. It is nonsense.
It was nonsense in 2010 when it appeared on the Methods exam and report, and it was nonsense again in 2011. It was nonsense in 2012 when we slammed it, and it was nonsense again when it reappeared in 2017 and we slammed it again. It is still nonsense, it will always be nonsense and, at this stage, the appearance of the nonsense is jaw-dropping and inexcusable.
It is simply not legitimate to swap the equation for , unless a specific argument is provided for the specific function. When valid, that can usually be done. Easily. We laid it all out, and if anybody in power gave a damn then this type of problem could be taught properly and tested properly. But, no.
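The underlying point is easy to illustrate with a counterexample of our own (not from the exam or the trial exam): for a decreasing function, the graphs of the function and its inverse can intersect off the line y = x, so solving f(x) = x can miss genuine intersection points. A minimal Python check, using f(x) = -x³:

```python
# Our counterexample (not the exam's function): f(x) = -x**3 is strictly
# decreasing, so its graph meets the graph of its inverse off the line
# y = x. Solving f(x) = x finds only x = 0 and misses the intersections
# at (1, -1) and (-1, 1).

def f(x):
    return -x**3

def f_inv(x):
    # inverse of f: solve y = -x^3 for x, i.e. x = -cbrt(y)
    return -(x ** (1/3)) if x >= 0 else (-x) ** (1/3)

# the graphs of f and f_inv genuinely intersect at x = -1, 0, 1
for x in (-1.0, 0.0, 1.0):
    assert abs(f(x) - f_inv(x)) < 1e-9

# but solving f(x) = x catches only one of those three intersections
on_line = [x for x in (-1.0, 0.0, 1.0) if abs(f(x) - x) < 1e-9]
print(on_line)  # [0.0]
```

So, the swap is only valid under extra hypotheses (an increasing function being the standard one), and any solution that makes the swap without stating such a hypothesis is incomplete.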
What were the exam writers thinking? We can only see three possibilities:
a) The writers are too dumb or too ignorant to recognise the problem;
b) The writers recognise the problem but don’t give a damn;
c) The writers recognise the problem and give a damn, but presume that VCAA don’t give a damn.
We have no idea which it is, but we can see no fourth option. Whatever the reason, there is no longer any excuse for this crap. Even if one presumes or knows that VCAA will continue with the moronic, ritualistic testing of this type of problem, there is absolutely no excuse for not also including a clear and proper justification for the solution. None.
What of the rest of the MAV, what of the vetters and the reviewers? Did no one who checked the trial exam flag this nonsense? Or, were they simply overruled by others who were worse-informed but better-connected? What about the MAV Board? Is there anyone at all at the MAV who gives a damn?
Postscript: For the record, here, briefly, are other irritants from the exam:
Q2. There are infinitely many choices of integers and with equal to the indicated answer of .
Q3. This is not, or at least should not be, a Methods question. Integrals of the form with non-linear are not, or at least are not supposed to be, examinable.
Q4. The writers do not appear to know what “hence” means. There are, once again, infinitely many choices of and .
Q5. “Appropriate mathematical reasoning” is a pretty fancy title for the trivial application of a (stupid) definition. The choice of the subscripted is needlessly ugly and confusing. Part (c) is fundamentally independent of the boring nitpicking of parts (a) and (b). The writers still don’t appear to know what “hence” means.
Q6. An ugly question, guided by a poorly drawn graph. It is ridiculous to ask for “a rule” in part (a), since one can more directly ask for the coefficients , and .
Q7. A tedious question, which tests very little other than arithmetic. There are, once again, infinitely many forms of the answer.
Q8. The endpoints of the domain for are needlessly and confusingly excluded. The sole purpose of the question is to provide a painful, Magrittesque method of solving , which can be solved simply and directly.
Q9. A tedious question with little purpose. The factorisation of the cubic can easily be done without resorting to fractions.
Q10. Above. The waste of a precious opportunity to present and to teach mathematical thought.
John (no) Friend has located an excellent paper by two Singaporean maths ed guys, Ng Wee Leng and Ho Foo Him. Their paper investigates (and justifies) various aspects of solving .
This one feels relatively minor to us. It is, however, a clear own goal from the VCAA, and it is one that has annoyed many Mathematical Methods teachers. So, as a public service, we’re offering a place for teachers to bitch about it.*
One of the standard topics in Methods is the binomial distribution: the probabilities you get when repeatedly performing a hit-or-miss trial. Binomial probability was once a valuable and elegant VCE topic, before it was destroyed by CAS. That, however, is a story for another time; here, we have smaller fish to fry.
The hits-or-misses of a binomial distribution are sometimes called Bernoulli trials, and this is how they are referred to in VCE. That is just jargon, and it doesn’t strike us as particularly useful jargon, but it’s ok.** There is also what is referred to as the Bernoulli distribution, where the hit-or-miss is performed exactly once. That is, the Bernoulli distribution is just the n = 1 case of the binomial distribution. Again, just jargon, and close to useless jargon, but still sort of ok. Except it’s not ok.
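For what it’s worth, the relationship is trivial to state and to check; here is a minimal Python sketch (our own, with an arbitrary choice of p) of the Bernoulli distribution as the n = 1 binomial:

```python
from math import comb

def binom_pmf(k, n, p):
    # binomial probability: P(X = k) for n independent trials,
    # each a hit with probability p
    return comb(n, k) * p**k * (1 - p)**(n - k)

def bernoulli_pmf(k, p):
    # Bernoulli distribution: a single hit-or-miss trial
    return p if k == 1 else 1 - p

# the Bernoulli distribution is exactly the n = 1 binomial
p = 0.3
for k in (0, 1):
    assert binom_pmf(k, 1, p) == bernoulli_pmf(k, p)
```

Which is to say, nothing of substance is added by the extra name.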
Neither the VCE study design nor, we’re guessing, any of the VCE textbooks, makes any reference to the Bernoulli distribution. Which is why the special, Plague Year formula sheet listing the Bernoulli distribution has caused such confusion and annoyance:
Now, to be fair, the VCAA were trying to be helpful. It’s a crazy year, with big adjustments on the run, and the formula sheet*** was heavily adapted for the pruned syllabus. But still, why would one think to add a distribution, even a gratuitous one? What the Hell were they thinking?
Does it really matter? Well, yes. If “Bernoulli distribution” is a thing, then students must be prepared for that thing to appear in exam questions; they must be familiar with that jargon. But then, a few weeks after the Plague Year formula sheet appeared, schools were alerted and VCAA’s Plague Year FAQ sheet**** was updated:
This very wordy weaseling is VCAA-speak for “We stuffed up but, in line with long-standing VCAA policy, we refuse to acknowledge we stuffed up”. The story of the big-name teachers who failed to have this issue addressed, and of the little-name teacher who succeeded, is also very interesting. But, it is not our story to tell.
MCQ4 (added 23/09/20) The question provides a histogram for a continuous distribution (bird beak sizes), and asks for the “closest” of five listed values to the interquartile range. As the examination report almost acknowledges (presumably in time for the grading), this cannot be determined from the histogram; three of the listed values may be closest, depending upon the precise distribution. The report suggests one of these values as the “best” estimate, but does not rely upon this suggestion. See the comments below.
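To see why a histogram alone cannot pin down the interquartile range, here is a toy illustration with invented numbers (nothing to do with the exam’s beak sizes): two data sets with identical histograms but different IQRs, since each quartile is only located to within its bin.

```python
# Two invented data sets with the same unit-bin histogram
# (three values in [1, 2), three values in [3, 4)):
data_a = [1.1, 1.5, 1.9, 3.1, 3.5, 3.9]
data_b = [1.1, 1.2, 1.9, 3.1, 3.8, 3.9]

def iqr(xs):
    # with six values, take the quartiles to be the middle values of
    # the lower and upper halves (one standard convention)
    xs = sorted(xs)
    half = len(xs) // 2
    return xs[half + half // 2] - xs[half // 2]

print(iqr(data_a))  # 2.0
print(iqr(data_b))  # about 2.6: same histogram, different IQR
```

With real bin counts the same game can be played, which is why only a range of IQR values, not a single value, is determined by the histogram.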
Q1(c)(ii) (added 13/11/20) – discussed here. The question is fundamentally nonsense, since there are infinitely many 1 x 3 matrices L that will solve the equation. As well, the 3 x 1 matrix given in the question does not represent the total value of the three products as indicated in Q(c)(i). The examination report does not acknowledge either error, but does add irony to the error by whining about students incorrectly answering with a 3 x 1 matrix.
MCQ9 Module 2 (added 30/09/20) The question refers to cutting a wedge of cheese to make a “similar” wedge of cheese, but the new wedge is not (mathematically) similar. The exam report states that the word “similar” was intended “in its everyday sense” but noted the confusion, albeit in a weasely, “who woulda thought?” manner. A second answer was marked correct, although only after a fight over the issue.
Q10(c) (added 13/11/20) – discussed here. The intended solution requires computing a doubly improper integral, which is beyond the scope of the subject. The examination report ducks the issue, by providing only an answer, with no accompanying solution.
Q3(b) (added 13/11/20) – discussed here. The wording of the question is fundamentally flawed, since the “maximum possible proportion” of the function does not exist here, and in any case need not be equal to the “limiting value” of the function. The examination “report” contains nothing but the intended answer.
MCQ20 (added 24/09/20) The notation refers to the forces in the question being asked, and seemingly also in the diagram for the question, but to the magnitudes of these forces in the suggested answers. The examination report doesn’t acknowledge the error.
We’re not really ready to embark upon this post, but it seems best to get it underway ASAP, and have commenters begin making suggestions.
It seems worthwhile to have all the Mathematical Methods exam errors collected in one place: this is to be the place.*
Our plan is to update this post as commenters point out the exam errors, and so slowly (or quickly) we will compile a comprehensive list.
To be as clear as possible, by “error”, we mean a definite mistake, something more directly wrong than pointlessness or poor wording or stupid modelling. The mistake can be intrinsic to the question, or in the solution as indicated in the examination report; examples of the latter could include an insufficient or incomplete solution, or a solution that goes beyond the curriculum. Minor errors are still errors and will be listed.
With each error, we shall also indicate whether the error is (in our opinion) major or minor, and we’ll indicate whether the examination report acknowledges the error, updating as appropriate. Of course there will be judgment calls, and we’re the boss. But, we’ll happily argue the tosses in the comments.
Q9(c), Section B (added 13/11/20) – discussed here. The question contains a fundamentally misleading diagram, and the solution involves the derivative of a function at the endpoint of a closed interval, which is beyond the scope of the course. The examination report is silent on both issues.
Q3(h), Section B (added 06/10/20) – discussed here. This is the error that convinced us to start this blog. The question concerns a “probability density function”, but with integral unequal to 1. As a consequence, the requested “mean” (part (i)) and “median” (part (ii)) make no definite sense.
There are three natural approaches to defining the “median” for part (ii), leading to three different answers to the requested two decimal places. Initially, the examination report acknowledged the issue, while weasely avoiding direct admission of the fundamental screw-up; answers to the nearest integer were accepted. A subsequent amendment, made over two years later, made the report slightly more honest, although the term “screw-up” still does not appear.
As noted in the comment and update to this post, the “mean” in part (i) is most naturally defined in a manner different to that suggested in the examination report, leading to a different answer. The examination report still fails to acknowledge any issue with part (i).
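The ambiguity is easy to reproduce with a toy “pdf” of our own (not the exam’s function): if the total integral is T ≠ 1, then requiring the integral up to the “median” to equal 1/2 gives a different answer from requiring it to equal T/2 (equivalently, from normalising the function first).

```python
# Invented example (not the exam's function): f(x) = x on [0, 2]
# integrates to 2, not 1, so "the median" is ambiguous.
from math import sqrt

def F(m):
    # integral of f(x) = x from 0 to m
    return m * m / 2

total = F(2.0)                   # 2.0, so f is not a genuine pdf
median_a = sqrt(2 * 0.5)         # solves F(m) = 1/2       ->  m = 1
median_b = sqrt(2 * total / 2)   # solves F(m) = total/2   ->  m = sqrt(2)
print(median_a, median_b)        # 1.0 versus about 1.414
```

Two (or more) perfectly natural definitions, two different answers; hence the mess over the requested two decimal places.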
Q4(c), Section B (added 25/09/20) The solution in the examination report sets up (but doesn’t use) the equation dy/dx = stuff = 0, instead of the correct d/dx(stuff) = 0.
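The distinction is between differentiating the curve and differentiating the quantity being optimised; a toy example of our own (not the exam’s functions) makes the difference concrete.

```python
# Invented example: minimise Q(x) = (x - 3)**2 + y(x), where y(x) = x**2.
def y(x):
    return x**2

def Q(x):
    return (x - 3)**2 + y(x)

def ddx(g, x, h=1e-6):
    # central-difference numerical derivative
    return (g(x + h) - g(x - h)) / (2 * h)

# dy/dx = 0 gives x = 0, which does NOT minimise Q;
# the correct equation d/dx(Q) = 0 gives x = 1.5, which does
assert abs(ddx(y, 0.0)) < 1e-6
assert abs(ddx(Q, 1.5)) < 1e-6
assert Q(1.5) < Q(0.0)
```

Setting up dy/dx = stuff = 0 conflates the two equations, even if, as here, the written solution never actually uses it.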
MCQ4 (added 21/09/20) – discussed here. The described function need not satisfy any of the suggested conditions. The underlying issue is the notion of “inflection point”, which was (and is) undefined in the syllabus material. The examination report ignores the issue.
Q7(b) (added 23/09/20) The question asks students to “find p“, where p is the probability that a biased coin comes up heads, and where it turns out that . The question is fatally ambiguous, since there is no definitive answer to whether p = 0 is possible for a “biased coin”.
The examination report answer includes both values of p, while also noting “The cancelling out of p was rarely supported; many students incorrectly [sic] assumed that p could not be 0.” The implication, but not the certainty, is that although 0 was intended as a correct answer, students who left out or excluded 0 could receive full marks IF they explicitly “supported” this exclusion.
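The “cancelling” point is the usual one: dividing an equation through by p silently discards the solution p = 0. A generic illustration (the exam’s actual equation is not reproduced here):

```python
from fractions import Fraction

# Solving a*p**2 = p by "cancelling p" leaves only p = 1/a;
# factorising p*(a*p - 1) = 0 keeps both roots.
def solutions(a):
    return [Fraction(0), Fraction(1, a)]

cancelled = [Fraction(1, 2)]   # what dividing through by p leaves, for a = 2
full = solutions(2)            # [0, 1/2]
print(full)
```

Whether the discarded root is then a legitimate answer is exactly the “biased coin” ambiguity above.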
This is an archetypal example of the examiners stuffing up, refusing to acknowledge their stuff up, and refusing to attempt any proper repair of their stuff up. Entirely unprofessional and utterly disgraceful.