As with yesterday, we’ve been handballed a copy of the Methods 2 exam, and we haven’t yet looked at it.
Our comments on the exam report are now interlaced below, in green. Needless to say, the examiners didn’t confess to, much less apologise for, any of their screw-ups.
The exam report is now available, here.
The exam is now available, here.
And, finally, Section B. It’s pretty bad, as it is every year. All in all, however, this year’s Methods exams seemed better than previous years’. Not good, but good by Methods’ poor standards.
Q1 It takes a special talent to screw up a completely standard box-volume problem. The question starts with an h × 2h sheet, which becomes a 25 × 50 sheet, which then becomes an h × 2h sheet again, and which, finally, becomes an h × h sheet. To which one can only respond, “Sheeeet!”
This re-re-repetition is an insane waste of precious exam time. And, they screw it up. As commenters have noted, when the box morphs back to have width h, we are told
the box’s length is still twice its width (emphasis added).
Which would be an interesting problem but is not what was intended. Commenters have suggested this would not have caused much confusion, but we’re not so convinced. In any case, it is a bad error.
(23/04/22) The exam report fails to even acknowledge the error. Of course. Because they’re gutless flugbongs.
Other than that, the question is pretty much just micro-bite CAS nonsense. There is a slight issue with the domains of the volume functions; presumably open intervals were expected, but mathematicians prefer closed intervals when possible, and usually permit “degenerate” cases at the endpoints.
(23/04/22) Yes, the exam report indicates that open intervals were expected for the domains in parts (b) and (f)(i). This is not wrong, but it is close. Part (g) also included a related, off-key and anal-retentive whine about how the domain “needed to be considered”. These people are not smart. (25/04/22) Having read SRK’s comment on the report, I’ll add two further remarks here.
(1) SRK is correct: the exam report’s solution to 1(g) is incomplete. This makes the report’s whining on this question, that “students should make sure adequate working is given”, pretty ironic.
(2) Thinking about it further, I still don’t think it is wrong to consider the domain to be an open interval. But it is definitely wrong to mark the closed interval incorrect. It seems clear from the exam report that this was done; it should not be done again.
Q2 Three distinct questions, the first of which is standard and easy. The second question, Part (f), has the students approximating an integral of a function on the interval [-2,2], but gives a graph of the function over [-3,3]; it’s not clear whether this was intended as a potential trick, but, whatever the intention, it seems pointlessly confusing. The third question, Parts (e) and (f), starts out fine, with calculating the area trapped between y = x² and y = √x. It then morphs into finding the area trapped between y = ax² and y = √x and x = a, asking for which values of a this area will equal 1/3. Honestly, who the hell cares?
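For the record, the first area is a thirty-second computation (a sketch; the curves y = x² and y = √x intersect at x = 0 and x = 1):

```latex
\int_0^1 \left(\sqrt{x} - x^2\right)dx
\;=\; \left[\tfrac{2}{3}x^{3/2} - \tfrac{1}{3}x^3\right]_0^1
\;=\; \tfrac{2}{3} - \tfrac{1}{3}
\;=\; \tfrac{1}{3}\,.
```

Which makes the subsequent CAS-fuelled a-morphing all the more pointless.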
Q3 A weird combination of two questions. The first question involves the function , which is too cute. Part (b)(ii) asks
Find the equation of the line that is perpendicular to the graph of q when x = -2 and passes through the point (-2,0).
It takes real effort to write that badly.
The second question involves the function . Asking students to “explain why p is not a one-to-one function” is pretty weird wording. The rest of that question seems ok but meandering, and pretty fiddly.
(23/04/22) So, how were students supposed to “explain why” p is not one-to-one? The expected answer was apparently
‘Fails the horizontal line test’ or ‘many-to-one function’ or ‘there exist two x-values for some y-values’.
Jesus. Needless to say, unless trying to hammer some sense into a nitwit examiner, chanting such phrases is meaningless ritual, explaining nothing about anything, and certainly nothing about the specific function p. These people are so, so stupid.
There was also a notable piece of nastiness in part (f), where an answer to three decimal places was required; the correct answer was 0.750, and the report appears to suggest that 0.75 was marked incorrect. Which, technically, it is. But seriously, can’t you micro-minded nitwits arrange for your questions to not nitpick such who-gives-a-stuff nonsense?
Q4 A standard and ok probability question. The final transformation a*f(x/b) of the pdf is ok, but is more naturally presented as a*f(bx), since the latter form results in a = b.
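To spell out the normalisation (a sketch, assuming f is a pdf on all of ℝ and b > 0; substitute u = x/b and u = bx, respectively):

```latex
\int_{-\infty}^{\infty} a\,f\!\left(\frac{x}{b}\right)dx \;=\; ab \;=\; 1
\;\Longrightarrow\; a = \frac{1}{b}\,,
\qquad
\int_{-\infty}^{\infty} a\,f(bx)\,dx \;=\; \frac{a}{b} \;=\; 1
\;\Longrightarrow\; a = b\,.
```

So, with the second form the two parameters coincide, which is the tidiness being suggested.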
Q5 There are issues. See here.
(23/04/22) For (g), the report indicates that 2% of students scored the 1 mark for this question, bluntly gives the answer as -√2 without explanation, and then simply notes
An exact answer was required. −2 was a common incorrect answer.
If VCAA examiners cannot pretend to give a stuff, what possible argument is there for treating them with anything less than complete contempt? They are a disgrace.
Here are our question-by-question comments on the multiple choice questions. Most of the MCQs seem pretty standard, mostly CAS nonsense, and there seems to be no general consternation, so we’ve only commented where there was something particular to note.
MCQ2 Weird Methodsy phrasing, asking which “graph” is identical, rather than which function.
MCQ4 A potentially tricky max-min question, with the maximum at an endpoint, but presumably students will have just used the stupid machine.
(23/04/22) Nope: with machine or otherwise, only 58% of students answered correctly.
MCQ8 (23/04/22) God knows why only 40% of students got this trivial question correct.
MCQ9 A standard range-of-a-composition question, but the function is not 1-1 and so will presumably trick a lot of students.
(23/04/22) Yep, only 56% of students answered correctly.
MCQ14 A very weird question, asking for which k the functions cos(kx – π/2) and sin(x) have the same average over [0,π]. Some good mathematics underlying the question, but asked in an incomprehensible manner, and anyway killed by the stupid machine.
(23/04/22) 63% of students answered correctly, which seems high. The exam report is silent.
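For what it’s worth, the underlying computation is easy enough (a sketch, using the identity cos(kx – π/2) = sin(kx), and assuming k ≠ 0):

```latex
\frac{1}{\pi}\int_0^{\pi} \sin(kx)\,dx
\;=\; \frac{1 - \cos(k\pi)}{k\pi}\,,
\qquad
\frac{1}{\pi}\int_0^{\pi} \sin x\,dx \;=\; \frac{2}{\pi}\,.
```

Equating the two averages amounts to solving 1 – cos(kπ) = 2k; note, for instance, that k = 1 works. We don’t know the offered options, but none of this is hard; it is simply asked in an incomprehensible manner.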
MCQ15 (23/04/22) Only 48% of students got this trivial binomial question correct.
MCQ16 A way over-egged trig question. Fundamentally easy, but we can’t imagine students will do well.
(23/04/22) Yep, a killing field. 31% of students answered correctly.
MCQ18 Magritte garbage. God, I hate Methods. I really, really hate Methods.
(23/04/22) The exam report indicates the Magritte garbage that was expected. 39% of students successfully negotiated the Magritte garbage. Completely insane.
MCQ19 (23/04/22) Dear Examiners, I don’t like you and you don’t like me. But if you actually read my blog on occasion, you might stop making idiots of yourselves so often. Your report’s explanation of this question is utter nonsense. See here. (24/04/22) As John Friend has noted in a comment, the report’s explanation is worse than we had suggested: while demonstrating that they don’t know how to prove a function is differentiable, the examiners have also gratuitously demonstrated that they do not know what “smooth” means. To quickly hammer the point, the examiners’ argument implies that the function x³/x is “smooth” at 0. And if you don’t know why that is false then Congratulations! You’ve qualified yourself to be a VCAA examiner.
MCQ20 A somewhat strange independent probability question. The information and the possible answers are framed in terms of an unknown p, but one can obviously and easily solve for p. This would be a natural first mathematical step, yet the step is entirely irrelevant to answering the question.