OK, we should have thought of this earlier. This post is for teachers and students (and fellow travellers) to discuss Methods Exam 1, which was held a few days ago. (There are also posts for Methods Exam 2, Specialist Exam 1 and Specialist Exam 2. We had thought of also putting up posts for Further, but decided to stick to mathematics.) We’ll also update with our brief thoughts in the near future.
Our apologies to those without access to the exam; unfortunately, VCAA is only scheduled to post the 2020 VCE exams sometime in 2023. The VCAA also has a habit of being dickish about copyright (and in general), so we won’t post the exam or reddit-ish links here. If, however, a particular question or two prompts sufficient discussion, we’ll post those questions. And, we might allow (undisplayed) links to the exams to stay in the comments.
UPDATE (07/09/21) We’ve finally had a chance to look at the examination report (a Word document, because it’s VCAA). Anything we’ve considered worth remarking upon we’ve included as an update to the associated question, in green.
UPDATE (31/12/20) The exam, and all the exams, are now online. (Thanks to Red Five for the flag.)
UPDATE (21/11/20) The link to the parent complaining about the Methods Exam 1 on 3AW is here. If you see any other media commentary, please note that in a comment (or email me), and we’ll add a link.
UPDATE (23/11/20) OK, we’ve now gone through the first Methods exam quickly but pretty thoroughly, have had thoughts forwarded by commenters Red Five and John Friend, and have pondered the discussion below. Question by question, we didn’t find the exam too bad, although we didn’t look to judge length and coverage of the curriculum. There was a little Magritteishness but we didn’t spot any blatant errors, and the questions in general seemed reasonable enough (given the curriculum, and see here). Here are our brief thoughts on each question, with no warranty for fairness or accuracy. Again, apologies to those without access to the exam.
Q1. Standard and simple differentiation.
Q2. A “production goal” having the probability of requiring an oil change be m/(m+n) … This real-world scenarioising is, of course, idiotic. The intrinsic probability questions being asked are pretty trivial, indeed so trivial and same-ish that we imagine many students will be tricked. It’s not helped by a weird use of “State” in part (a), and a really weird and gratuitous use of “given” in part (b), for a not-conditional probability question.
(07/09/21) On Part (b), the examination report states that
Students generally recognised the conditional probability.
Given there is no conditional probability in the question, we have absolutely no idea what they’re talking about.
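For the record, and for anyone puzzled by the report, the word “given” conventionally signals conditional probability, which is presumably what students are supposed to have “recognised” (the events A and B here are generic, not from the exam):

```latex
\[
\Pr(A \mid B) \;=\; \frac{\Pr(A \cap B)}{\Pr(B)}.
\]
```

Part (b), as worded, asks for nothing of this form.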
Q3. An OK question on the function tan(ax+b). Stating “the graph is continuous” is tone-deaf and, given they’ve drawn the damn thing, a little weird. The information a > 0 and 0 < b < 1 should have been provided when defining the function, not as part of the eventual question. Could someone please send the VCAA guys a copy of Strunk and White, or Fowler, or Gowers, or Dr. Seuss?
Q4. A straight-forward log question.
Q5. For us, the stand-out stupidity. See here.
(07/09/21) The average score on Part (b) was 0.5/2, the examination report noting
[Students] need to be aware that simply quoting a rule or formula is not sufficient; they are required to demonstrate how it is used within the context of the question (i.e. in this case, give evaluations of Pr(X=2) and Pr(X≥1)). Many students did not present their answer in the required form.
This smells of lunacy. Putting aside VCAA’s demand of an idiotic and non-unique form for the answer, if a student works through such a question to get a correct answer then they obviously know exactly what they’re doing. Unlike VCAA examiners.
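For readers without the exam: judging from the report’s mention of Pr(X = 2) and Pr(X ≥ 1), the question presumably amounted to a conditional binomial probability. A generic sketch, for some binomial X ~ Bi(n, p) (the exam’s actual n and p are not reproduced here):

```latex
\[
\Pr(X \ge 1) = 1 - \Pr(X = 0) = 1 - (1-p)^n, \qquad
\Pr(X = 2 \mid X \ge 1) = \frac{\Pr(X = 2)}{\Pr(X \ge 1)}
= \frac{\binom{n}{2}\,p^{2}(1-p)^{n-2}}{1 - (1-p)^{n}}.
\]
```

A student who produces the final fraction has, self-evidently, evaluated both probabilities along the way.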
Q6. An OK graphing-integration question, incorporating VCAA’s fetish. Interestingly, solving the proper equation in (b) is, for a change, straight-forward (although presumably the VCAA will still permit students to cheat, and solve instead). As discussed in the comments, the algebra in part (c) is a little heavier than usual, and perhaps unexpected, although hardly ridiculous. The requirement to express the final answer in a specified form, however, is utterly ridiculous.
(07/09/21) Part (a) requires finding the inverse of the function f(x) = √x/√2, obviously expecting the usual x-y flipping. On this, the examination report notes
Many students left their answer as y = 2x², thus assuming f(x) was the same as f⁻¹(x), in contradiction to their prior working.
Bullshit. The word the examiners are looking for is “implying”, not “assuming”, and leaving the answer in y-x form implies nothing of the sort. Just more obsessive nastiness.
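For those playing along at home, the flipping in question is a one-liner (a sketch, ignoring the fussing over domains):

```latex
\[
y = \frac{\sqrt{x}}{\sqrt{2}}
\;\xrightarrow{\;x \,\leftrightarrow\, y\;}\;
x = \frac{\sqrt{y}}{\sqrt{2}}
\;\Longrightarrow\;
\sqrt{y} = \sqrt{2}\,x
\;\Longrightarrow\;
y = 2x^{2},
\]
```

so that f⁻¹(x) = 2x² (for x ≥ 0). Whether one then writes the last line as y = 2x² or as f⁻¹(x) = 2x² is labelling, not mathematics.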
The average on Part (c) was 1.6/4. Presumably the majority of this was getting lost in (by button-happy Methods standards) the heavy algebra, but the examination report also makes clear VCAA’s culpability:
Errors in … answering in the format specified by the question were common.
Q7. This strikes us as a pretty simple tangents-slopes question, although maybe the style of the question will throw students off. Part (c) is in effect asking, in a convoluted manner, for the closest point to the x-axis on a no-intercepts parabola. Framed this way, the question is easy. The convolution, however, combined with the no-intercepts property having only appeared implicitly in a pretty crappy diagram, will probably screw up plenty of students.
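To illustrate the easy framing (with a generic parabola; the exam’s coefficients are not reproduced here): if y = ax² + bx + c with a > 0 and no x-intercepts, then b² − 4ac < 0, the parabola lies entirely above the x-axis, and the closest point to the axis is simply the vertex:

```latex
\[
x = -\frac{b}{2a}, \qquad
y_{\min} = c - \frac{b^{2}}{4a} = \frac{4ac - b^{2}}{4a} > 0.
\]
```

No calculus is required beyond locating the vertex.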
(07/09/21) Unsurprisingly, Part (c) was a write-off, with an average score of 0.2/2. It is unclear how much of this was due to the crappy diagram.
Q8. A second integration question featuring VCAA’s fetish. Did we really need two? The implicit hint in part (d)(i) and the diagram are probably enough to excuse the Magritteness of part (d), but it’s a close call. Much less excusable is part (c):
“Find the area of the region that is bounded by f, the line x = a and the horizontal axis for x in [a,b], where b is the x-intercept of f.”
Forget Dr. Seuss. Someone get them some Ladybird books.
(07/09/21) The average score on Part (c), an easy follow-on from (b), was 0.5/2, the examination report noting
Many students were unsure of which terminals to use for the definite integral …
The examiners failed to mention that the students were probably “unsure” of the terminals because they couldn’t decipher the ridiculous, stream-of-consciousness question.
Part (d) may as well not have existed, with an average score of 0.2/3. Students may have run out of time, or simply given up.