This is the home for Specialist Mathematics exam errors. The guidelines are given on the Methods error post, and there is also a Further error post.
The list is now “complete”, in the sense that it includes all the errors of which we are aware. (We have given the earlier exams only a very, very quick scan.) We will update and correct the list whenever anything is brought to our attention, and of course when new exams appear.
2023 SAMPLE EXAM QUESTIONS 2 (exam questions here – discussed here and here)
MCQ1 (added 06/04/23 – discussed here) Flat out wrong. Quantified statements do not have contrapositives.
MCQ2 (added 06/04/23 – discussed here) VCAA fails to follow their own conventions.
2023 SAMPLE EXAM QUESTIONS 1 (exam questions here – discussed here and here)
Q2 (added 06/04/23 – discussed here) A completely screwed induction question.
Q4 (added 06/04/23 – discussed here) A proof by contradiction that should be done directly.
Q6 (added 06/04/23) Rotated curves sweep out surfaces, not solids.
Q7 (added 06/04/23) The exact same issue as Q6.
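(For the record, the distinction that Q6 and Q7 trip over, as we would summarise the standard definitions, not anything VCAA has written: rotating the curve y = f(x), with f ≥ 0 on [a, b], about the x-axis sweeps out a surface of revolution; it is the region between the curve and the axis that, when rotated, gives a solid of revolution. The associated area and volume are)

```latex
S = 2\pi \int_a^b f(x)\,\sqrt{1 + \big(f'(x)\big)^2}\;dx
\qquad\text{(area of the surface of revolution)}
\\[4pt]
V = \pi \int_a^b \big(f(x)\big)^2\;dx
\qquad\text{(volume of the solid of revolution)}
```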
Q10 (added 06/04/23 – discussed here) A badly flawed logistic population question. Part (a) is unanswerable, and (c) is not asking what VCAA thinks it is asking.
2022 EXAM 2 (13/12/22 – exam here, discussed here) (06/04/23 – report here (Word, idiots))
MCQ4 (added 10/11/22) – discussed here. There is simply no correct answer. A shocking error, after having made basically the same error the previous year (and refusing to own up to it). (06/04/23) The report simply pretends the error did not occur.
MCQ19 (added 10/11/22) – discussed here. The population mean is given rather than the required sample mean. The question can be done in some mechanical manner, but is fundamentally meaningless and pointless.
QB4(d) (added 10/11/22) The question is badly ambiguous. In asking “how far does the ball travel” it is unclear whether arc length or straight-line distance is required.
QB6(f) (added 10/11/22) – discussed here. A mess, since the required independence of random variables has not been declared. What was intended is unclear, leading to two different answers, both arguably reasonable. (06/04/23) The report simply pretends the error did not occur.
2022 EXAM 1 (13/12/22 – exam here, discussed here) (06/04/23 – report here (Word, idiots))
Q3(b) (added 11/11/22 – discussed here) The statement on “the mean time taken to dispense 25 cups of coffee” is unambiguous, and simply does not declare what was intended. (06/04/23) The report simply pretends the error did not occur.
Q6(b)(i) (added 10/11/22) There is a pretty serious ambiguity, since the vectors can also be expressed without any reference to y.
Q6(b)(ii) (added 14/12/22) The “vector scalar (dot) product” is not a thing.
Q7 (added 10/11/22) There are multiple answers of the required form.
Q10(b) (added 10/11/22) Asking for the answer in the form (a – √b)/c with a, b and c real is absurd. Presumably it was intended that a, b and c be integers, which would still have been flawed, but somewhat reasonable. (01/04/23) Worse, as has just been pointed out to us, rotating a graph does not create a solid of revolution: it creates a surface of revolution. This issue also appears here.
2022 NHT EXAM 2 (Here, report here – discussed here).
We are not aware of any errors on this exam.
2022 NHT EXAM 1 (Here, report here – discussed here).
Q6 (added 18/10/22) The formula for is incorrect. The subsequent formula for is correct.
Q10(b) (added 01/11/22) In the second line of the examination report solution, the quantity under the square root sign should be squared.
2021 EXAM 2 (Here, and report here – discussed here).
QB2 (added 24/11/21) – discussed here. A mess of a question. Part (a)(i) makes absolutely no sense, since the writers have forgotten that real numbers are also complex numbers. That then leads to Part (a)(ii) having two distinct solutions. (09/05/22) The report belatedly notes that (a)(ii) has two solutions, but simply lies about the solution to (a)(i). Disgraceful.
QB5 (added 24/11/21) – discussed here. Mostly just an appalling question, but it is worth noting that it’s not great to have a car taking off with infinite acceleration. And, it would be very nice if, some day, someone at VCAA learned what “smoothly” means.
QB6 (added 09/05/22) – discussed here. The preamble to part (c) refers to “main daily sales” rather than “mean daily sales”. The error itself is klutzy but no big deal, and the error is noted in the exam report, albeit in an ass-covering “no students were disadvantaged” manner. The problem is, the published exam has fixed the error, but with no record that there was an error. This is simply not what you do. At least not if you have integrity.
2021 EXAM 1 (Here, and report here – discussed here).
Q9 (added 24/11/21) – discussed here. An absolute mess, with errors. The main error is that the domains of the particles are effectively undefined, but the entire question is appalling. (23/04/22) The exam report is silent on the particles perhaps not colliding, and is silent on the infinitely many correct forms of the answer for part (c)(ii).
2021 NHT EXAM 2 (Here, and report here – discussed here).
QB2(b)(ii) (added 21/10/21) There are infinitely many correct answers of the required form.
2021 NHT EXAM 1 (Here, and report here – discussed here).
Q2 (added 21/10/21) The question confuses whether P is the force or the magnitude of that force; if the latter, which seems to be the intention, then P cannot “act horizontally”, etc.
Q3 (added 21/10/21) – discussed here. Concepts outside the syllabus. Whatever VCAA may now wish to claim, the binomial distribution is not part of the Specialist Mathematics curriculum.
Q5(b) (added 21/10/21) The suggested form of the answer is absurd and leaves infinitely many possibilities.
Q7 (added 21/10/21) The quantity v is never defined.
Q9 Q10 (added 21/10/21) – discussed here. A disastrous question, with no correct answer. The solution in the examination report is complete nonsense. (13/10/22) Commenter E has noted that the first line of the exam report’s solution has an error (even on its own terms).
2020 EXAM 2 (Here, and report here – discussed here.)
MCQ2 (added 21/10/21) The suggested approach in the examination report makes no sense.
MCQ9 (added 21/10/21) – discussed here. The question is utterly meaningless.
MCQ11 (added 21/10/21) – discussed here. The question is effectively meaningless. The integral is improper and divergent, meaning that if any answers are considered correct then all of A, C and D should be. The examination report does not acknowledge the issue.
QB3(e)(ii) (added 21/10/21) – discussed here. The question is absurd. The examination report gives absolutely no clue how to go about answering the question.
2020 EXAM 1 (Here, and report here – discussed here).
Q1 (added 02/02/22) – discussed here and here. 1(b) is a mess as worded, and as graded, since the expectation is for students to treat the acceleration as if it were a scalar. In particular, a nonsensical remark in the examination report strongly suggests that students who gave the negative of the report’s answer were invalidly marked down.
Q2 (added 21/10/21) – discussed here. The required form of the answer is meaningless.
Q6 (added 21/10/21) – discussed here and here. A mess. The examination report indicates that in (a) students “needed to demonstrate the use of the chain rule”, which the report’s solution does not do. The “hence” beginning (b) is meaningless and consequently misleading.
Q7(a) (added 21/10/21) The examination report provides no proper indication of what is required for the given function to be continuously differentiable. See the discussion here.
Q8 (added 21/10/21) The required form of the answer is meaningless.
Q9(b) (added 21/10/21) The required form of the answer leaves infinitely many possibilities.
2019 EXAM 2 (Here, and report here)
MCQ12 (added 22/10/21) – discussed here. The question is completely meaningless (and intrinsically absurd), involving a vector with a projection of larger magnitude than that of the original vector. The examination report foolishly and dishonestly and cowardly pretends there is some sense in the question. Utterly disgraceful.
QB(1) (added 01/11/21) – discussed here. A confused question, which simply presumes, and expects students to assume, that the relation given in (b) is restricted by the parametrisation given in (a). There is no reason to assume that, making the answer to (b) in the examination report simply, and arrogantly, wrong. Similarly, (e) as worded is meaningless.
QB(5)(c)(i) (added 02/11/21) – discussed here. The solution in the examination report fails to consider the possibility that m2 > m1 (in which case the angle theta is irrelevant).
QB(6)(f) (added 02/11/21) – discussed here. In itself the question is ok. The issue is, for the corresponding question 6(e) on the 2018 exam, both rounding up and (incorrect) rounding down were accepted, without a word of explanation or warning in the report about future grading policy.
2019 EXAM 1 (Here, and report here)
Q9(b) (added 22/10/21) The question involves tension in a ring, which needed to be assumed equal on both sides of the ring. This is a (once upon a time) standard assumption, but apparently many students did not make the assumption. The question states that “The tension in the string has a constant magnitude”, which is further confusing rather than clarifying. The examination report refuses to acknowledge the confusion, sneakily suggesting that “the statement of the question” indicated equal tension on the two sides of the ring; this is dishonest and cowardly.
2019 NHT EXAM 2 (Here, and report here)
MCQ 17 (added 01/11/22) The question asks for the maximum height of a thrown ball, but does not specify the initial height of the ball, when thrown. This should have been specified, or the question should have asked for the vertical displacement.
QB(2)(a)(iii) (added 16/11/21) Similar to QB(1)(b) from 2017 Exam 2, below, students are required to graph a function “from x = -6 to x = 6”, and are instructed to “label the asymptotes”. The graph in the examination report goes beyond the specified domain, which, inadvertently, pinpoints the issue: specifying a finite domain precludes the possibility of horizontal asymptotes.
2019 NHT EXAM 1 (Here, and report here)
Q2(b) (added 22/10/21) – discussed here. The question asks for an answer to 2 decimal places, but the precise answer (using the standard approximation) is 0.025. The examination report states, without explanation, that both 0.02 and 0.03 were accepted as correct.
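(The 0.02/0.03 split is the classic round-half borderline. As a quick illustration, using Python’s decimal module rather than anything VCAA-specific: the two standard rounding conventions disagree on exactly this value.)

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

x = Decimal("0.025")

# "Round half up" (the convention most students are taught) gives 0.03.
up = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# "Round half to even" (banker's rounding, the IEEE 754 default) gives 0.02.
even = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

print(up, even)  # 0.03 0.02
```

Which is presumably why both answers had to be accepted, although the report really should have said so.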
2018 EXAM 2 (Here, and report here)
MCQ3 (added 24/10/21) – discussed here. A badly flawed, and nasty, question. Arguably, there is no correct answer. The comment in the examination report is largely incomprehensible.
QB(3)(f) (added 24/10/21) – discussed here. The question includes concepts outside the syllabus, and the solution in the examination report is incomplete.
QB4(e) (added 24/10/21). The expression “period of time” was slightly ambiguous. It would appear that both the length of time and the time interval were accepted.
QB(6)(a) (added 24/10/21) – discussed here. The examination report implies that a one-tailed test is appropriate in the given scenario. That is far from clear and is better considered false.
2018 EXAM 1 (Here, and report here)
Q3 (added 24/10/21) There are infinitely many answers of the correct form.
Q6(b) (added 24/10/21) – discussed here and here. The exam incorrectly asks for a change in momentum in the units kg ms-2. The examination report indicates that a correct calculation of the rate of change of momentum also received full marks. There is not a single word of acknowledgment that, much less apologising for, VCAA having screwed up.
Q8(b) (added 24/10/21) There are infinitely many answers of the correct form.
Q10 (added 24/10/21) – discussed here. There are infinitely many correct answers, and a separate ambiguity. The question is fundamentally flawed, and is simply not asking what VCAA thinks it is asking.
2018 NHT EXAM 2 (Here, and report here)
QB(1)(d) (added 24/10/21) – discussed here. The intended approach is valid but very difficult to justify, and is way, way beyond the scope of VCE. The solution involves evaluating an improper integral, which is beyond the scope of VCE, and which is handled poorly by at least one of the standard CAS machines.
2018 NHT EXAM 1 (Here, and report here)
Q8(c) (added 24/10/21) The intention was to ask for all rays of the form Arg(z) = α that are perpendicular to a given circle. Instead, the question asked for “the equations of all rays that are perpendicular to the circle in the form Arg(z) = α”. These are not the same.
2017 EXAM 2 (Here, and report here)
MCQ10 (added 26/10/21) – discussed here. The question is fundamentally flawed (and is appalling). In particular, depending upon one’s notion of inflection point (which is not defined in the syllabus), it is possible that there are inflection points where f = 0. The examination report is missing a minus sign.
QB(1)(b) (added 15/10/21). The question instructs students to “Sketch the graph of f(x) = x/(1+x3) from x = -3 to x = 3″. Notwithstanding the finite domain, the examination report indicates an asymptote y = 0. There is some ambiguity since, of course, any sketch will be over a finite domain, meaning the indication of an asymptote must be more suggestive than accurate. Nonetheless, the exam instruction to graph the function over a specific finite domain precludes any possibility of a horizontal asymptote. The examination report is clearly in error, and if students were penalised for not having included a horizontal asymptote then this was also an error. (16/10/21)
A lesser issue is that the examination report indicates coordinates of points on the graph to decimal places; this follows on from part (a), but nonetheless violates VCAA’s direction that answers should be exact unless otherwise specified.
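(To spell out the asymptote issue for this particular f: the limit at infinity is 0, which is the only possible justification for the report’s y = 0, but that limit is invisible on [-3, 3]; the one genuine asymptote within the instructed domain is the vertical one at x = -1, where the denominator vanishes.)

```latex
f(x) = \frac{x}{1+x^3}, \qquad
\lim_{x \to \pm\infty} f(x) = 0, \qquad
\lim_{x \to -1^{\pm}} f(x) = \mp\infty .
```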
QB(4) (added 26/10/21) – discussed here. For (b), the examination report appears to demand an absurd amount of working for the trivial solving of a quadratic equation, and falsely claims that substituting is an invalid method to “show” solutions of an equation. Part (c) is meaningless, and the examination report arrogantly blames the students for not discerning the non-existent meaning. Part (f) is similarly meaningless, but is worse. In sum, an appalling question.
QB(5) (added 26/10/21) – discussed here. The question is a bit of a mess, but it is not clear that there is an error as such. One of the standard machines apparently struggles with (c)(ii). The examination report has an extra √ sign for some reason. (30/10/22) Yes, there is an error. As John Friend has pointed out below, there are two values, , which work, giving two different starting times and starting positions. (The second solution arises from taking ). The exam question indicates the collision occurs “shortly after starting”, which might be used to argue for the earlier of the two starting times, but it’s not enough. There are two entirely separate solutions, each with its own first time of collision reasonably described as “shortly after starting”.
2017 EXAM 1 (Here, and report here)
Q2 (added 26/10/21). There are infinitely many answers of the correct form.
Q8(b) (added 26/10/21). There are infinitely many answers of the correct form.
Q10(c) (added 13/11/20) – discussed here. The intended solution requires computing a doubly improper integral, which is beyond the scope of the subject. The examination report ducks the issue, by providing only an answer, with no accompanying solution.
2017 NHT EXAM 2 (Here, and report here)
Q3(b) (added 13/11/20) – discussed here. The wording of the question is fundamentally flawed, since the “maximum possible proportion” of the function does not exist here, and in any case need not be equal to the “limiting value” of the function. The examination “report” contains nothing but the intended answer.
2017 NHT EXAM 1 (Here, and report here)
Q3 (added 26/10/21) The required form of the answer is absurd, and there are infinitely many answers of this form.
2016 EXAM 2 (Here, and report here)
MCQ10 (added 26/10/21) – discussed here. The question is clunky and absurd, and there is no correct answer.
QB(1)(e) (added 26/10/21). The question is poorly written, so there are infinitely many correct answers. (Compare the question discussed here.)
2016 EXAM 1 (Here, and report here)
We are not aware of any errors on this exam.
2015 EXAM 2 (Here, and report here)
QB(3)(b) (added 26/10/21) The answer is required in an absurd form, and there are infinitely many answers of that form.
2015 EXAM 1 (Here, and report here)
Q2 (added 26/10/21) The answers required students to use g = 9.8, rather than g = g, for God only knows what reason.
2014 EXAM 2 (Here, and report here)
QB(1)(b) (added 26/10/21) Similar to 2017 and 2019 NHT, once a finite domain has been specified, it is meaningless to talk about horizontal asymptotes.
2014 EXAM 1 (Here, and report here)
Q3(b) (added 26/10/21) The “given that” is meaningless.
Q5(b) (added 27/10/21) – discussed here. The question is completely meaningless. The substitution u = x, for example, would perfectly satisfy the parameters of the question.
2013 EXAM 2 (Here, and report here)
MCQ6 (added 27/10/21) – discussed here. The question is purely and simply stuffed. The original examination report indicated that all students were awarded the mark but without indicating the error. That has been rectified, although the tense and tone are not exactly overflowing with remorse.
QB(3)(e) (added 27/10/21). A slightly peculiar graphing question, where no particular points were required to be identified. Reportedly, a number of different types of answers were accepted.
2013 EXAM 1 (Here, and report here)
Q6 (added 27/10/21) – discussed here. The question is very poorly formed, and is best thought of as wrong.
2012 EXAM 2 (Here, and report here)
We are not aware of any errors on this exam.
2012 EXAM 1 (Here, and report here)
Q9(b) (added 27/10/21) There are infinitely many answers of the required form.
Q10(b) (added 27/10/21) There are infinitely many answers of the required form.
2011 EXAM 2 (Here, and report here)
We are not aware of any errors on this exam.
2011 EXAM 1 (Here, and report here)
Q4 (added 27/10/21) The suggested form of the answer is weird (and unnecessary). Presumably, the intention was to specify that k be rational, rather than real.
2010 EXAM 2 (Here, and report here)
MCQ11 (added 29/10/21) – discussed here. An absolutely ridiculous, meaningless question.
QB(3)(e) (added 29/10/21) No degree of accuracy was required, which led to weird answers and subsequent answers being accepted.
QB(4) (added 29/10/21) – discussed here. A weird question, the sole purpose of which seems to be to test whether students recognised that . (They didn’t.) Part (c) is meaningless as written, and the solution in the examination report is fundamentally invalid, even for the intended meaning. The use of the (essentially meaningless) term “hybrid function” in (d) and (e) is weird and pointless.
QB(5)(b) (added 29/10/21) There are infinitely many answers of the required form.
2010 EXAM 1 (Here, and report here)
Q10 (added 27/10/21) The suggested form of the answer is weird, and there are infinitely many answers of that form.
2009 EXAM 2 (Here, and report here)
MCQ6 (added 29/10/21) – discussed here. Simply screwed. There is no correct answer. The examination report is silent.
2009 EXAM 1 (Here, and report here)
Q10(b) (added 29/10/21) There are infinitely many answers of the required form.
2008 EXAM 2 (Here, and report here)
QB(1) (added 30/10/21) Part (b)(i) is weirdly phrased, and is not asking what the examiners think it is asking. There are, for example, infinitely many cubics that correctly answer the question. Similarly, (d)(ii) is not asking what is intended, with infinitely many correct answers.
2008 EXAM 1 (Here, and report here)
Q4 (added 30/10/21) The required form is absurd, and there are infinitely many answers of the required form.
2007 EXAM 2 (Here, and report here)
MCQ14 (added 01/11/21) A poorly worded and ill-posed question, asking for a differential equation for which “the” solution “models” a population scenario. The question should have asked for the appropriate initial value problem.
QB(1)(d) (added 01/11/21) A fundamentally meaningless question, analogous to B(4)(c) on the 2017 exam, discussed above and here. Interestingly, students performed significantly better on the 2007 exam question, suggesting that the required ritual was reasonably well known in 2007 but had been forgotten by 2017.
2007 EXAM 1 (Here, and report here)
Q5(b) (added 31/10/21) There are infinitely many answers of the required form.
Q5(b) (added 31/10/21) The required form is absurd, and there are infinitely many answers of the required form.
2006 EXAM 2 (Here, and report here)
MCQ20 (added 24/09/20) The notation refers to the forces in the question being asked, and seemingly also in the diagram for the question, but to the magnitudes of these forces in the suggested answers. The examination report doesn’t acknowledge the error.
2006 EXAM 1 (Here, and report here)
Q4(b) (added 24/09/20) There are infinitely many answers of the required form.
63 Replies to “Hard SEL: The Specialist Error List”
OK, I’ll have a go with 2006 Multiple Choice Q17.
85% of students gave the intended answer of B and I agree that B is mostly correct, although I would prefer the question to say the *acute* angle or something similar just to avoid ambiguity (if this were a paper 1, would students be marked wrong for writing 315 degrees?)
Also, is the assumption that the vectors are tails together when the angle is measured?
Possibly not a “mistake” in the true sense, but the VCAA-induced pedant in me is looking for these things a lot more now.
Hmm. Good question. Obviously not a hanging offence. (Last year’s projection question was a hanging offence.) It’s a matter of convention: what does “angle between vectors” mean in Specialist?
I can’t recall if I’ve been asked this by a student, but the way I think of it: in Specialist we think of vectors as arrows free to move around in space, so first we move both vectors so their tails are at the same point, and then the size of the angle between them is the smallest anti-clockwise rotation required to superimpose one of them upon the other.
And normally I would agree, but this is VCAA.
Because it is a multiple-choice question, there probably is not a need, since there is only one correct answer given.
But if it were not multiple choice, I would have liked some more guidance.
Thanks, SRK. Definitely one has to think of vectors with tails at the same point. The only question is whether “angle” between v and w automatically means “acute angle”.
I would like some statement somewhere to say that “angle” refers to the non-reflex angle in the absence of the adjective “reflex”.
Specifying acute or obtuse in some cases may be considered giving too much of a hint.
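(A footnote on why the dot product settles this: defining the angle via arccos of the normalised dot product automatically yields a value in [0°, 180°], so the reflex/anti-clockwise question never arises. A quick sketch, with made-up vectors:)

```python
import math

def angle_between(v, w):
    """Angle between vectors v and w (tails together), via the dot product.

    math.acos returns a value in [0, pi], so this is automatically the
    non-reflex angle: no convention about rotation direction is needed.
    """
    dot = sum(a * b for a, b in zip(v, w))
    norm_v = math.sqrt(sum(a * a for a in v))
    norm_w = math.sqrt(sum(b * b for b in w))
    return math.acos(dot / (norm_v * norm_w))

# Hypothetical example: i + j and i - j are perpendicular.
print(math.degrees(angle_between((1, 1), (1, -1))))  # 90.0
```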
OK, back to the 2007 Papers. SM Paper 1s seem (for the most part) error-free. It is the Multiple choice that I feel may be the major source of errors, with more than one correct answer being the most common “error”.
Sure. Again, it’s not a hanging offence, but if VCE convention suggests they should have specified acute, then they should have.
I would also question whether VCAA would mark as wrong all of the following answers since there are no modulus signs in use and the vectors are clearly all different…
Again, pedantic, so possibly not “wrong” as such.
Again not a hanging offence, but it looks a good slap is not out of the question. What year?
Same year, same paper, different page.
Thanks, I’ll look carefully.
Yep, it’s an error.
OK, here is another one that I’m not sure about (the question is fine, it is the report that I don’t like):
SM 2006 Paper 1 Q5a (4 marks): Show that
Examiner’s report: “Quite a few students whose working was correct failed to complete their solution, giving no proper explanation as to why the negative answer should be rejected, or not mentioning the negative answer at all.”
Now… seriously??? So surely it is obvious that is positive…???
OK, if you use a half angle formula there is a *possibility* that the half angle will have a different sign, but not for angles in quadrant 1.
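(For anyone wanting the computation presumably intended, hedged as our reconstruction rather than the exam’s exact wording: with t = tan(π/8), the double angle formula gives a quadratic whose negative root must be rejected, and, as RF says, the rejection is trivial for a first-quadrant angle.)

```python
import math

# With t = tan(pi/8), the double angle formula tan(pi/4) = 2t/(1 - t^2) = 1
# rearranges to t^2 + 2t - 1 = 0, with roots -1 ± sqrt(2).
roots = (-1 + math.sqrt(2), -1 - math.sqrt(2))

# pi/8 is in the first quadrant, so tan(pi/8) > 0: keep the positive root.
t = max(roots)
print(t, math.tan(math.pi / 8))  # both ≈ 0.41421356...

# The rejected root is not junk, by the way: it is tan(5*pi/8).
print(min(roots), math.tan(5 * math.pi / 8))  # both ≈ -2.41421356...
```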
Jesus, this gets old.
RF, it’s not an “error” by my reckoning, but it is moronic. Yet another example of a decent question turned to trash by the “show that” formulation.
OK, so I agree with you in the sense that the question itself is fine.
The way the question was marked (according to the report) I feel is wrong, or at least hideously unfair to students who would look at this and say “well, of course it is positive” and move on with the exam.
It’s a judgment call, and I’ll think more about it.
Obviously the intrinsic problem is fine. The question is, what is reasonable to expect as an answer, given they asked the problem in ‘Show that’ form. The best answer to that is “Don’t fucking ask stupid fucking ‘show that’ questions, you dumb shits”. But, they did.
Given they did, there is the reality that more than a few students will have bluffed some of the quadratic work.
Agreed. There was no need for it to be a ‘Show’ or ‘Prove’ question. It would have been great as a simple ‘Find’ question.
And if you’re worried about students not explicitly rejecting extraneous answers, it could have asked to find the value of tan(7pi/8). Yes, a bit trickier because the half-angle is not as obvious, but surely tan(7pi/8) = -tan(pi/8) is not asking too much of a student …?
Part (b) uses the value of tan(pi/8), but in a totally gratuitous way. Part (b) works just as well if tan(pi/8) is replaced with any other comparable value.
Is it a VCAA requirement to have a direct link between the two parts of a two part Exam 1 question? Surely a link by topic or theme is sufficient? The ill-conceived motivation to link the two parts via the value of tan(pi/8) is what wrecked what could have been a good part (a) question.
Compare, from the same year, Exam 2, Section B, 5(b), which is more or less the same question (but with cos rather than tan), but in this case students are explicitly told to explain why any values are rejected.
See also 2009 Exam 2, Section B, 4(c) for this issue in a non-trig context.
Agreed this is not an error, but it is definitely mixed messaging for students / teachers.
Re: 2009 Exam 2, Section B, 4(c).
I dislike the “giving reasons for rejecting any solutions.” prompt in this question. The extraneous solution in this case is less obvious than it is for the tan(pi/8) question – but it’s obvious enough and I’d want students, particularly students, to recognise the existence of extraneous solutions without prompting. Such recognition and rejection prompting is what’s worth the 1 mark in my book. And surely part (d)(ii) contains a subtle prompt …. (negative volumes, anyone?)
But I totally agree with the mixed messaging.
OK, I’ll toss my hat in the ring with something that still makes my blood boil 12 years later.
2008 Exam 1 Q7: The question asks for the exact value of F. No required form for this value is given.
Using Lami’s Theorem you get the exact value of .
This answer was NOT accepted. Only the exact surd value of was accepted
(which you get by either:
1) resolving forces, or
2) spending another 6 minutes using a compound angle formula to get the value of in surd form, substitute into and then simplify)
(I know this because a little birdy told me).
The Examination Report does not mention any of this, except to make the following snide comment: “Those who correctly used [Lami’s Theorem] usually could not go on to find F”.
I have no problem with requiring an exact *surd* value – but it’s an error of omission (and dishonest) not to declare this in the question. And it’s an error of omission (and deceitful) not to comment on this in the Report. Either *completely* specify the required form (exact surd value), OR use angles where this issue won’t occur. The stupid thing is that if this question had been on Exam 2, there would be no issue.
(It’s the same with Specialist questions that want equations of lines, don’t ask for any particular form, but then only accept an answer given in the form y = mx + c).
Not accepting is crazy on two fronts:
1. It IS an exact answer.
2. Lami’s theorem is so often the more efficient way of solving problems with a triangle of forces that to not allow it is penalising students for being efficient, and seems totally wrong.
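(For concreteness, a hypothetical two-string example, not the 2008 exam setup: Lami’s theorem and resolving into components are algebraically the same calculation, so penalising one over the other is doubly absurd.)

```python
import math

# Hypothetical setup: a weight W hangs from two strings attached to a
# ceiling, the strings making angles alpha and beta with the horizontal.
W, alpha, beta = 10.0, math.radians(35), math.radians(55)

# Method 1: resolve forces into components and solve the pair
#   T1*cos(alpha) = T2*cos(beta)            (horizontal)
#   T1*sin(alpha) + T2*sin(beta) = W        (vertical)
T1_resolve = W * math.cos(beta) / math.sin(alpha + beta)

# Method 2: Lami's theorem -- each force over the sine of the angle
# between the other two forces. The angle between T2 and the weight is
# 90° + beta, and between T1 and T2 is 180° - (alpha + beta).
T1_lami = W * math.sin(math.pi / 2 + beta) / math.sin(math.pi - (alpha + beta))

print(T1_resolve, T1_lami)  # identical, up to rounding
```

Since sin(90° + β) = cos β and sin(180° − (α + β)) = sin(α + β), the two expressions are the same formula.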
RF, I hate to play assholes’ advocate here, but the fact that a certain technique is sometimes or commonly more efficient/successful doesn’t mean it always is. The question is to what extent, and how, should exam questions be designed to have students handle such issues.
I’ll have to think about this one. The expression “exact value” is more problematic than is commonly understood; the in , for example, is cloaking our general inability to deal with real numbers in an “exact” manner. It’s not really a value, let alone an exact value. And, how can a value be “inexact”? Nonetheless, feels less answerish to me.
@Marty: I agree that feels “less answerish”. Nevertheless the issue is whether or not it answers the question that was asked and, if not, on what grounds does it fail. I wonder whether would have been accepted (and if not, why not).
@RF: I totally agree. I don’t move in the ‘right circles’ but every now and then I do meet little-names who know things. It’s a total disgrace that you can only find these things out by accident. The default required form of a line is one that really gets up my nose – you won’t find it written anywhere in a Report.
In the Lami’s Theorem case, info from a little-name led to me chasing a big-name down a walkway at a 2008 conference, yelling for an explanation (big-name sought refuge in a packed lecture theatre and unfortunately the talk taking place was not about the social benefits of gladiatorial contests). But apparently we’re meant to know that ‘hybrid’ answers are not acceptable (whatever the hell a hybrid answer is).
Nowadays I tell my students to only consider using Lami’s Theorem
1) in Exam 2, or
2) if they can see (during reading time in Exam 1) that only special angles are involved.
Otherwise resolve the forces. So valid methods get held hostage by VCAA-idiocy.
@RF (again): I totally agree that using efficient methods should not be penalised if they give ‘less preferred’ answers. It’s up to the writer to make sure things like this don’t happen. In reality, is any less practical/useful/meaningful than ?
JF, I’d make the tu quoque response, is any less practical/useful/meaningful than ?
Personally, I’m not too bothered by this one. Using compound angle formulae to calculate trig ratios of multiples of is something that I’d expect all Specialist students to be familiar with from Year 11, and then return to in a variety of contexts in Year 12 – circular functions, complex numbers, vectors, dynamics. One could even throw it in when calculating a definite integral or the angle between tangents. (It’s a bit like , you know the students will forget, so regular spaced reinforcement is required). So while these don’t quite have the hallowed status of the 30-45-60 values, I think it’s fair game for Specialist. I also don’t think this view overgeneralises, since can be calculated in one line from a single use of a compound angle formula from the “special” angles, unlike more recherché cases like .
None of this is to disagree with the broader point about the lack of clarity and transparency from VCAA about what is considered an acceptable form of a final answer, when none is specified.
Also, putting aside the merits of this question as an *exam* question, I did find it an instructive example to go through with my students, just on this point of how to decide between resolving forces into rectangular components or using sine / cosine rule.
Hi SRK. My only objection to is that there’s probably some arcane VCAA convention known to maybe 12 people in the world that requires trig(special angle) to be simplified:
For many years many of us knew anecdotally (thanks to little birdies) that VCAA did not accept numerically correct answers such as 0.23/0.47. It had to be 23/47. Only in the last few years has VCAA deigned to include this important information in a Report. So who knows what other arcane VCAA bullshit lore is out there.
Your proposed answer answers the question so it has to be accepted. Or an explanation given as to why it’s not accepted.
Now here’s the problem – there are clear and numerous precedents where VCAA explicitly prescribe/micro-manage the form in which they want an answer. OK, I’m fine with that. But when this doesn’t happen in a question, there’s a very reasonable expectation that any reasonable form (within the parameters specified by the Reports) is acceptable. Except this is not what happens. It gets decided behind closed doors that only one form of answer is acceptable and all other forms are wrong. In the question under discussion, some idiot retrospectively decided that only exact *surd* form was acceptable, and didn’t have the guts to explain in the Report *why* the simple ‘hybrid’ form obtained from Lami’s Theorem was wrong.
All of the above applies to:
1) Questions where the equation of a line is the answer. Even when it’s not stated in the question, only the form y = mx + c is accepted, apparently. How is the average teacher expected to know this? From the Report, you would hope. Nope. Unless they’re an assessor or meet a little birdy, they won’t know and will blithely think the slope-point form is OK.
2) Questions where g appears in the answer. In the absence of an instruction, when to substitute g = 9.8 and arithmetically simplify and when not to …? Surely both should be accepted, particularly when the exam DEFINES g = 9.8. Except sometimes they’re not both acceptable. It should say so in the bloody question!
VCAA has shown on numerous occasions that it can be a petty pedantic prick. But it never seems to be pedantic for important things like specification of syllabus in a Study Design, exam questions, Reports …
This is wrong, unfair, unjust and stupid.
…and another thing that annoys me, while we are on the topic…
All these “a little bird told me” snippets are really useful and very interesting, but it does sort of imply that unless a teacher moves in the right circles (MAV may suggest that their “meet the assessors” sessions are the right circles, but recent evidence on this is… inconclusive) they do not learn these very valuable insights and their future students suffer as a result.
I have no doubt it happens in a lot of subjects, but is it FAIR?
Of course it’s not fair.
The broader and ultimate unfairness is that VCAA is not transparent in how the exams are marked. I do not understand how VCAA gets away with not making the marking scheme available.
If the MAV did not have such an unhealthy, incestuous relationship with the VCAA, it could be a genuine voice for Victorian mathematics teachers, and the marking scheme could be made publicly available.
Could this be an error? NHT 2018 Exam 1, Q8c.
Any ray passing through the centre will be perpendicular. (Or does “in the form Arg(z)=a” imply rays starting from the origin?)
It is my understanding that Arg(z) = a means the ray starts at the origin, whereas Arg(z − z0) = a means the ray starts at z0.
Again, though, I have never seen this formalised in a VCAA report or curriculum document.
A couple of clarifications:
1) Marty’s comment (below) could be misconstrued as implying a definition.
“Arg(z) always refers to the (appropriate) angle that the line from O through z makes with the positive real axis.”
is a mathematical fact. Consider Arg(z) = π/4. So you want the values of z that have a principal argument of π/4. Clearly they lie on the line y = x with x > 0, noting that the point on this line with x = 0 is z = 0, and Arg(0) is not defined, so this point is not included and you have a ‘hole’ at the origin. In other words, the part of the line from (but not including) O through z that makes an angle of π/4 with the positive real axis.
In a similar way, Arg(z) = −π/4 defines values of z lying on the line y = −x with x > 0. In other words, the part of the line from (but not including) O through z that makes an angle of −π/4 with the positive real axis. Note that the negative sign in −π/4 means the angle is measured clockwise from the positive x-axis. Pick any value of z lying on this ‘half-line’ (that is, ray) and it will have a principal argument of −π/4.
2) For Arg(z − z0) = a, the value z = z0 is the terminus (starting point) of the ray but is NOT included (open circle) because at that point you have Arg(0), which is undefined.
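As a quick sanity check (my own, in Python, not from the exam; `cmath.phase` returns the principal argument, measured from the positive real axis):

```python
import cmath
import math

# Points on the ray y = x, x > 0: principal argument is pi/4
for z in (1 + 1j, 2 + 2j, 0.5 + 0.5j):
    assert math.isclose(cmath.phase(z), math.pi / 4)

# The reflected half-line y = x, x < 0 has argument -3*pi/4, not pi/4,
# so it is NOT part of the ray Arg(z) = pi/4
assert math.isclose(cmath.phase(-1 - 1j), -3 * math.pi / 4)

# The origin is the problem point: cmath.phase(0) returns 0.0 by convention,
# but mathematically Arg(0) is undefined, hence the open circle at the terminus
print(cmath.phase(0))
```

The library has to return *something* at 0, which is exactly why the ray’s terminus needs the open circle in a sketch.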
Hi, BWS. The question is fine (and nice, but with poor sentence construction). Arg(z) always refers to the (appropriate) angle that the line from O through z makes with the positive real axis.
Thank you both for the clarification.
2017 Specialist Maths Exam 2 Q1:
Part (b) asks students to draw a graph from x = -3 to x = 3 but the solutions include a horizontal asymptote y = 0. The error is the stupid instruction to draw the graph “from x = -3 to x = 3” in the first place, since the intent was clearly to draw a graph of the function in part (a) on the given axes. If there’s a choice of wordy and ambiguous versus succinct and clear, rely on VCAA to do the former.
Furthermore, the VCAA pedants are constantly pedanting about their global instruction “Unless otherwise stated, an exact answer is required to a question”. And yet, in part (b) nothing is “otherwise stated” but the Examination Report gives coordinates to 2 dp. So students have to assume that the answers found in part (a) are to be used in part (c).
Thanks, John. I’ve added the asymptote to the list above, with some discussion as to why it is an error.
Just to clarify the decimal point thing, I assume your point is that in general – with no instructions and ignoring part (a) – then in part (b) (not part (c)) the students would be expected to provide exact coordinates? It seems clear enough to me in this context to permit (I assume not demand) approximate coordinates in (b). So, are you really objecting to (b), or are you just having a whack at their general hypocritical nitpickery?
Thanks, Marty. Yes, I meant part (b).
Re: The decimal thing. It’s an objection AND a whack.
Objection: If the answers from part (a) are to be used in part (b), then it should say so in part (b): Either “Using your answers to part (a) …” or “Give all coordinates correct to two decimal places.” Because the VCAA instruction is very clear: “Unless otherwise specified, an exact answer is required to a question.”
Whack: VCAA are so sanctimonious about its instructions and yet are happy to ignore them (see also my post about ignoring accuracy in an answer at the Methods Errors blog). Where’s the instruction that says “Unless otherwise stated, the accuracy used in one part of a question must be used in later parts.” … ? Your comment that “It seems clear enough to me in this context to permit (I assume not demand) approximate coordinates in (b).” is far too generous. It’s not good enough. If VCAA want to live by the pedantry, they must die by the pedantry. And of course the Report makes no mention of students who gave exact answers in part (b) (I can’t believe for one moment that such students don’t exist) and whether or not they were penalised. We’ll never know.
It’s an error of omission (in the question) at best, and an error in the answer at worst (the kind of error VCAA loves to get on its high horse and mewl about). Pure VCAA hypocrisy.
And what do we see in the 2019 NHT Exam 2 Question 2 part (a) (iii) … “Sketch the graph of from to (endpoint coordinates are not required) …” And what answer do we see in the so-called ‘Report’:
i) A horizontal asymptote, AND
ii) a graph that extends past both sides of the specified interval.
Probably the same idiot writer doubling down on his/her stupidity.
Ok I guess it’s fair enough to add as an error, although it’s minor compared to having an asymptote to a function that stops at x = 3. I’ll add tomorrow, and will also look at the NHT question you just raised.
OK, in the last few days we’ve added a *lot* of errors to this list, some minor and more than a few major (in red). We took a very, very quick look at all the older exams, to see if anything caught our eye. We’re now done with this list, except of course for future errors, and except for corrections and past errors flagged by commenters/emailers.
I know that WitCH 67 is already a horrific question, having read the article, but VCAA have once again somehow failed to perform basic arithmetic. Regarding Question 10, the exam report claims that squaring the LHS gives a term of -i*a at the very right of the LHS. However, doing it both by hand and in Mathematica, they appear to have missed that the term is -i*sqrt(a). While this is a small typo, it may leave students extremely confused as to how the given equation was manipulated into the following line. The rest of the working is alright, but leaving that typo in is probably detrimental to students’ understanding of the question.
Nice spot, E.
Indeed, it *will* confuse students (and some teachers) as to how the first line was simplified to get the second line, since there is no cancellation of terms. Quality control is not VCAA’s strong suit.
If one gives a garbage solution to a garbage question, one should at least ensure that the garbage solution has no typos.
That’s very (black) funny. Thanks, E. I’ve added a note to the post.
Note that the rest of the working out is “alright”, but does not prove that a = 3 is a solution. WitCH 67 still applies.
Error in NHT Report for Exam 1 Q6 – the i-component of the second derivative of r is wrong (the power of t should be -3/2 not -1/2).
Thanks, John. Obviously just a cut and paste error, or whatever, but an error. Added.
Re: 2017 Exam 2 Q5 (d).
I’m probably blind. But can anyone see a reason why and corresponding time (approx 4.249 seconds) is not included in the answer? Different starting point, different time for collision …
I can only think that for reasons not given, only the position (value of a) that gives the smallest time for a collision to occur is accepted …?
Hi, John. Does your solution correspond to t being in the third quadrant? If so, I think the question indicating a collision “shortly after starting” is clear enough, although obviously the wording isn’t great.
Case 1: gives as the first time for collision.
Case 2: gives as the first time for collision.
The jetski starts in a different spot for each case. So I would have thought that this meant that the collision at would still be the first time of collision for that value of !
If VCAA only wanted the value of for the smaller of the two ‘first times’, then “the wording isn’t great” is a massive understatement!! I certainly don’t interpret the wording this way!
Predictably, no mention in the Examination Report of students who gave both cases as their answer and whether or not that was accepted.
Yeah I think you’re probably right. I’ll think about it more and then probably add it to the list.
Yes, I decided you’re correct, and that it is sufficiently wrong to be red. I’ve added to the entry above, and I’ll also add a comment to the relevant WitCH.
2022 NHT exam 1 10b
First problem: a minor typo, it seems – they have forgotten the squared outside of the (t^2-6) in the solutions. Furthermore, they then seem to skip 2-3 steps in one line. They do not justify dropping the modulus outside of the square root, simply giving the negative answer of the integral without justification. While this might not seem like a major issue for a Specialist Maths student, it should still be mentioned that the rejection of the positive solution needed to be stated.
Thanks, E. Yep, just a typo but an error. I’ll add. I also agree that the report should have indicated why the sign on was switched, although it’s pretty standard for NHT solutions to be bare bones. (I wouldn’t word it as “rejection of the positive solution”, but I know what you mean.)
2017 NHT exam 2 extended response 6d
The answer in the examination report stated 409870. While the question required rounding to the nearest ten dollars, 409870 actually makes the p-value lower than 0.05, hence H0 would be rejected. Is 409860 more suitable in this case…?
Thanks, Vivian. I’ll check it out tomorrow.
A more accurate answer for the critical value for rejection is 409,869.12. So now the question is: do you round up or down?
VCAA’s rounding policy is inconsistent and can lead to confusion and angst. Do they want mathematical rounding or ‘real life’ rounding? Rounding up or rounding down?
In this case, if you round 409,869.12 down to 409,860 then the p-value is a bit bigger than 0.05, which means that you’d be incorrectly rejecting H0 for a sample mean of 409,860 (or, in fact, any sample mean between 409,860 and 409,869).
If you round 409,869.12 up to 409,870, not only are you correctly rounding mathematically (which is probably what VCAA wants you to do) but you will also be correctly rejecting H0 for a sample mean of 409,870 or greater.
So the answer given in the Examination Report is correct. There is no error.
If you’re unconvinced, you should check what decision you’d make using 409,860 as your critical value of rejection for observed sample means like 409,865 or 409,867 etc.
Or alternatively, calculate the probability of getting those sample means (you’ll find the probabilities are all greater than 0.05).
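The rounding-direction logic can be sketched numerically. The figures below (population mean, standard deviation, sample size) are hypothetical stand-ins, not the exam’s values; the point is only that rounding the critical value down gives a threshold whose p-value exceeds 0.05, while rounding up does not:

```python
from statistics import NormalDist

# Hypothetical figures for illustration only: one-sided test at the 5% level
mu0, sigma, n, alpha = 400_000, 25_000, 25, 0.05
se = sigma / n ** 0.5
dist = NormalDist(mu0, se)  # sampling distribution of the sample mean under H0

# Exact critical value: the sample mean with P(Xbar >= c) = 0.05
c_exact = dist.inv_cdf(1 - alpha)

def p_value(xbar):
    # One-sided p-value for an observed sample mean
    return 1 - dist.cdf(xbar)

c_down = (c_exact // 10) * 10  # rounded down to the nearest $10
c_up = c_down + 10             # rounded up to the nearest $10

# Rounding down gives a "critical value" whose own p-value is above 0.05,
# so using it would wrongly reject H0 for sample means between c_down and
# c_exact; rounding up keeps every rejected sample mean below p = 0.05.
assert p_value(c_down) > alpha
assert p_value(c_up) < alpha
```

Same reasoning as above: with the exam’s numbers, 409,870 (rounded up) is the defensible critical value, and 409,860 is not.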
Sorry for cutting your lunch on this one, Marty. I know how much you enjoy diving into the stats stuff. (Second only to getting root canal surgery at the dentist).
More generally (and this question is a fine example), I always wonder why the writers use a perfect square for the sample size in Exam 2 (and it’s always a ‘small’ perfect square like 25). On Exam 1, sure – let’s keep the arithmetic simple. But on Exam 2 with all its button pushing …? It always strikes me as VCAA setting up yet another misconception.
Re 2022 Exam 2: “MCQ19 (added 10/11/22) – discussed here. The population mean is given rather than the required sample mean. The question can be done in some mechanical manner, but is fundamentally meaningless and pointless.”
Marty, only if you have the time and/or will, I’d like to see “… and conceptually flawed” added to the end of your last sentence here. I think stopping at “meaningless and pointless” misses the fact that the question is fundamentally defective.
(Almost every VCAA Specialist Maths stats question can be done in some mechanical manner and is meaningless and pointless …)
Nah. I could have said more, but that’s enough. I don’t discuss the question above, and if people want the details, they have the link.
The 2017 exam 2 report was amended 7 Oct 2022. Comparing with the original report, I cannot discern what change(s) was made. None of the errors raised in this blog were corrected. VCAA probably made a much more important amendment, like replacing a comma with a full stop.
As an aside, it is very frustrating that amendments get made to reports, but there is no mention of what change(s) was made.
Thanks, John. The only change was a minor correction of the solution to 3(e): there was an extra 4 factor inside the brackets, which was removed. I agree: at least VCAA flags when a report has been amended but, unless entirely trivial, there should also be flags at the specific locations of the edits.
Thanks, Marty. Good to know that after 5 years the VCAA got around to fixing the most serious of all the errors on that report.
I’ve added the errors from the sample exam questions for the new study design.