We’re not really ready to embark upon this post, but it seems best to get it underway ASAP, and have commenters begin making suggestions.

It seems worthwhile to have *all* the Mathematical Methods exam errors collected in one place: this is to be the place.*

Our plan is to update this post as commenters point out the exam errors, and so slowly (or quickly) we will compile a comprehensive list.

To be as clear as possible, by “error”, we mean a definite mistake, something more directly wrong than pointlessness or poor wording or stupid modelling. The mistake can be intrinsic to the question, or in the solution as indicated in the examination report; examples of the latter could include an insufficient or incomplete solution, or a solution that goes beyond the curriculum. Minor errors are still errors and will be listed.

With each error, we shall also indicate whether the error is (in our opinion) **major** or minor, and we’ll indicate whether the examination report acknowledges the error, updating as appropriate. Of course there will be judgment calls, and we’re the boss. But, we’ll happily argue the tosses in the comments.

Get to work!

*) Yes, there are also homes for Specialist Mathematics and Further Mathematics errors.

******************************

**2016 EXAM 2 (Here, and report here)**

**Q3(h), Section B (added 06/10/20) – discussed here.** This is the error that convinced us to start this blog. The question concerns a “probability density function”, but with integral unequal to 1. As a consequence, the requested “mean” (part (i)) and “median” (part (ii)) make no definite sense.

There are three natural approaches to defining the “median” for part (ii), leading to three different answers to the requested two decimal places. Initially, the examination report acknowledged the issue, while weaselly avoiding direct admission of the fundamental screw-up; answers to the nearest integer were accepted. A subsequent amendment, made over two years later, made the report slightly more honest, although the term “screw-up” still does not appear.

As noted in the comment and update to this post, the “mean” in part (i) is most naturally defined in a manner different to that suggested in the examination report, leading to a different answer. The examination report still fails to acknowledge any issue with part (i).
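The three-definitions point can be sketched numerically with a hypothetical density whose integral is not 1; the actual exam function is not reproduced here, so the function `f` below, and the interval, are stand-ins:

```python
# Hypothetical "pdf" with total integral 2, not 1 (the exam's actual
# function is not reproduced here): f(x) = x on [0, 2].
def f(x):
    return x

A, B = 0.0, 2.0

def integral(lo, hi, n=2000):
    # Midpoint-rule quadrature (exact for linear f)
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

def bisect(g, lo, hi, tol=1e-9):
    # Root of a monotone g on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

total = integral(A, B)  # 2.0, so this is not a genuine pdf

# Three natural "medians" m:
m1 = bisect(lambda m: integral(A, m) - 0.5, A, B)        # area 1/2 to the left
m2 = bisect(lambda m: integral(m, B) - 0.5, A, B)        # area 1/2 to the right
m3 = bisect(lambda m: integral(A, m) - total / 2, A, B)  # normalise, then halve
```

For this stand-in density the three definitions give 1.00, 1.73 and 1.41 to two decimal places: three different answers, as claimed above.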

**Q4(c), Section B (added 25/09/20)** The solution in the examination report sets up (but doesn’t use) the equation dy/dx = stuff = 0, instead of the correct d/dx(stuff) = 0.

**2016 EXAM 1 (Here, and report here)**

**Q5(b)(i) (added 24/09/20)** The solution in the examination report gives an incorrect expression in the working, rather than the correct one.

**2014 EXAM 2 (Here, report here)**

**MCQ4 (added 21/09/20) – discussed here.** The described function need not satisfy any of the suggested conditions. The underlying issue is the notion of “inflection point”, which was (and is) undefined in the syllabus material. The examination report ignores the issue.

**2011 EXAM 2 (Here, and report here)**

**Q4, Section 2 (added 23/09/20)** The vertex of the parabola is incorrectly labelled (-1,0), instead of (0,-1). The error is not acknowledged in the examination report.

**2011 EXAM 1 (Here, and report here)**

**Q7(b) (added 23/09/20)** The question asks students to “find *p*”, where *p* is the probability that a biased coin comes up heads, and where it turns out that p = 0 or p = 3/4. The question is fatally ambiguous, since there is no definitive answer to whether p = 0 is possible for a “biased coin”.

The examination report answer includes both values of *p*, while also noting: *“The cancelling out of p was rarely supported; many students incorrectly* [sic] *assumed that p could not be 0.”* The implication, but not the certainty, is that although 0 was intended as a correct answer, students who left out or excluded 0 could receive full marks IF they explicitly “supported” this exclusion.

This is an archetypal example of the examiners stuffing up, refusing to acknowledge their stuff up, and refusing to attempt any proper repair of their stuff up. Entirely unprofessional and utterly disgraceful.

**2010 EXAM 2 (Here, report here)**

**MCQ17 (added 28/09/20) – discussed here.** Same as in the 2014 Exam 2, above: the described function need not satisfy any of the suggested conditions.

**2007 EXAM 2 (Here, report here and discussed here)**

**MCQ12 (added 26/09/20)** Same as in the 2014 Exam 2, above: the described function need not satisfy any of the suggested conditions, as discussed here.

One example I think could be part of the inflexion point shenanigans would be MM Exam 2 2014 MCQ 4, which has a function f with no other conditions than being continuous. The problem is that you could construct a piecewise defined function, e.g. if and if . Differentiating gives which satisfies the conditions in the question, and then one last differentiation gives us that there is a change in concavity, but not for any x, most importantly f″(5).

Thanks, Sai, that question is definitely screwed, as discussed here, and was on my radar to include. I’m not quite sure, however, how your example contradicts the question.

Ah phooey, I messed up the function in formatting… so much for a response early in the morning. Trying this again with (with any choice for the antiderivative f) will yield that if and obviously, definitely not being equal to 0. I also found another instance in which they fucked it up, 2007 MM Exam 2 (CAS and non-CAS) Q12, which is the same problem, different numbers. There are no comments either in the assessor’s report… I would also like to point out that the itute solutions posted on either occasion don’t make any comment. Make of this what you will…

Edit:

No clue what’s going on with the LaTeX, my point is that will not give you a point of inflexion on f.

Hi, Sai. I tidied up your comments. (I’ve been changing settings, to deal with some spam issues, which may be confusing you and others, including me.) I added the 2007 exam as a link, because it seemed to want to be displayed, but failed. I’ll check out that exam now.

I still don’t quite understand your example. (Doesn’t your change sign, as you want?) The underlying question here is the meaning of “inflection point” at some . The standard, but not universal, notion is “change of concavity” at . That’s not quite a definition, since it leaves open how differentiable the function need be, particularly at . The 2014 exam is implicitly using a different notion.

Ah I’ve unintentionally mixed in two different ideas. One was that you don’t need to have for a point of inflexion, only a change in concavity in some neighborhood and the other, which is that you could have but not necessarily a point of inflexion. The example I posted was there to suggest that you could have a change in concavity without as VCAA seems to think, although I’m not sure if Methods explicitly defines “stationary points of inflexion”.

Hi, Sai. Hardly your fault. I don’t think “point of inflection” (stationary or otherwise) is precisely defined anywhere in the VCE material. In general, people don’t require f to be differentiable at to have an inflection point at , and so is not (usually) necessary. Also, the simple example shows that is not sufficient. The much trickier thing is to show that the conditions of the 2014 MCQ aren’t sufficient for an “inflection point” in any change of concavity sense. That is the purpose of the example Burkard and I give in our critique of the exam.

Thanks, Sai. I’ll tidy your comment soon. For LaTeX you put the dollar signs, and right after the first dollar sign you type the word latex.

2011 Math Methods Exam 1, Q7b

The examiner report shows that both p = 0 and p = 3/4 should be retained as valid answers.

Even though we all know probabilities can take any value within [0, 1], it wouldn’t be wise to say a tangible coin can NEVER come up tails. The examiner report neither provides a rigorous mathematical explanation that makes sense to most teachers, nor explains how students who rejected p = 0 at the end were marked. In fact, it looks like many students who cancelled p^2 in the first place were penalised, and what was expected of students was expanding both probabilities, rearranging, and using the null factor law to get “both solutions”.

Same year – 2011, Paper 2: ERQ4, the point (-1, 0) was labelled instead of (0, -1), though it should not have impeded students from getting some work done (seemingly, but who knows?)

Thanks, AIPNH. Every error is an error. I’ll check them out.

Re: MM 2011 Exam 1 7b. I’d have to work through the question thoroughly, but my initial reaction is that p = 0 is a reasonable solution, since that would represent one of the two extreme cases of a biased coin: a two-tailed coin.

The point about cancelling p^2 is interesting (to me). I have taught my students that it would be fine to divide by p^2, but this assumes that p^2 ≠ 0, in which case one should also consider the equation p^2 = 0 and what solutions (if any) that has. This seems just as valid an approach as moving all the unknowns to one side with 0 on the other, then factoring, etc.

SRK, “reasonable” doesn’t cut it here.

SRK,

You are right. However, cancellation of common terms will definitely get them penalised for some questions in spesh (in particular certain scenarios in exam 1s).

Examples include:

– 2012 SM1 Q2 (3 marks): if a student cancelled cos(x) on both sides after using the compound angle formula, award a maximum of 1 out of 3 marks

– 2017 NHT SM1 Q6 (3 marks): if a student cancelled tan(x), also a maximum of 1 out of 3 marks

– 2019 SM1 Q4 (3 marks): if a student presented any evidence of cancelling “t” on both sides when equating the x components, such as drawing a slash through the “t”s, deduct 1 mark, whether or not the final answer is right.

Even though the marking procedures vary from time to time, I still believe it is safer for the kids to perform the following procedure in any Methods or Spesh exams:

1. Expand both sides.

2. Don’t do any cancellations. Rearrange everything to one side, making RHS = 0.

3. Take out any common factor and factorise “properly”.

4. State all solutions from above, and see if there are any solutions to be rejected. If any need rejection, state the reason.

That’s what I really emphasise with my students every year, in the hope they don’t lose any extra marks and play it safer.
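For what it’s worth, the factor-don’t-cancel point can be sketched with a hypothetical equation having the roots quoted elsewhere in this thread (p = 0 and p = 3/4); the actual exam equation is not reproduced here, so 4p³ = 3p² is a stand-in:

```python
# Hypothetical equation with roots p = 0 and p = 3/4; the actual exam
# working is not reproduced here. Consider 4p^3 = 3p^2.

def lhs(p): return 4 * p**3
def rhs(p): return 3 * p**2

# Factoring route: 4p^3 - 3p^2 = p^2(4p - 3) = 0, so the null factor law
# gives p = 0 (from p^2 = 0) and p = 3/4 (from 4p - 3 = 0).
candidates = [0.0, 3 / 4]

# Cancelling p^2 from both sides keeps only 4p = 3, silently assuming
# p != 0 and discarding that solution:
after_cancelling = [3 / 4]

assert all(lhs(p) == rhs(p) for p in candidates)  # both really are solutions
```

The point is the same one SRK makes: dividing by a common factor is fine only if the case where that factor is zero is considered separately.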

P.N. I’m aware of the 2019 question you mentioned; I raised this issue at the Meet the Assessors earlier this year, and the response I got was the one you just gave. I wasn’t persuaded then, and I’m still not persuaded.

Consider . The method you recommend would be .

Whereas I would just write . There is no risk of “losing” solutions.

Yes SRK,

Honestly, I was not persuaded either (when I was informed by someone in November last year).

I feel your second solution is also great. When term 4 starts I will show my students your method.

Some penalties in Spesh or Methods marking are not well known, and they vary from year to year. However, common penalties come from pedantry.

SRK and P.N., I’m too busy to chase down such leads right now. But if there is clear evidence in the examination reports of such obviously valid methods being penalised, please give the precise references in the comments here and I will check them out.

Marty,

Attached are the comments for Q4.

Hi, P.N. Sorry for being obtuse but I don’t see how this connects to SRK’s valid (but declared invalid?) method of solving .

Of course it is valid to “cancel out” a t or a cos x or whatever, as long as you consider the possibility of the cancelled term being 0. Is there any direct evidence from the examination report and/or assessor solutions that VCAA considers otherwise?

2007 MM (NO CAS) Exam 2

Question 2 Tasmania – Insects being deadly.

The very last part was intended to test whether students could use the graph of a trig function to find the points of intersection. However, the ambiguity of “insects being deadly” is definitely NOT a good example of authentic mathematical modelling, and it is a real pain in the arse – could our students conjure some magic from thin air, to link the concentration level with insects being deadly, and then determine the safe period(s) of total time for comparison? I will say this question was excruciating and notoriously absurd.

Thanks, P.N. I’ll look at it, although it sounds more stupid than wrong.

Jesus. That is monumentally stupid. But, I don’t think it qualifies as “wrong”.

PINOF, you also refer to the question as “NO CAS”. Is the non-CAS version, on this question or in general, different from the CAS version, and is the non-CAS version available?

Here you go marty, the link to the non-CAS Maths Methods Exam 2 from 2007 (https://www.vcaa.vic.edu.au/Documents/exams/mathematics/2007mm2.pdf). This is slightly different to the CAS version of the same exam (https://www.vcaa.vic.edu.au/Documents/exams/mathematics/2007mmCAS2.pdf). The only difference I can see in Question 2 is the marks allocated to part f(i). In the non-CAS version it is 3 marks, whereas in the CAS version it’s 2. Presumably due to CAS cutting out some portion of the working?

Thanks, Steve. Is there a page with links for these non-CAS exams?

Marty,

Unfortunately these old collection pages are gone, following the big update of VCAA exams last year.

Luckily I know where you can still access them.

These Mathematical Methods (No CAS) papers can be accessed somewhere else. I will email you the link.

Unfortunately not, marty. I only found that link by Googling “2007 MATHEMATICAL METHODS Written examination 2” and looking carefully at the URLs in the search results – one was 2007mm2.pdf and the other 2007mmCAS2.pdf, and that was enough to get it.

Not sure where you could head to find the page with these exams (in the same way you can find the current exams on the VCAA site). As VM has said below, there may be one, but I’m not aware of the link personally.

No worries, Steve. There are ways and there are ways.

2012 MM2 Q4b exam report:

This Tasmania question is intended to ask students to “show that” the tank will be empty after 20 minutes.

Great question but poor exemplar answer in the report. The suggested approach is “verify”, not “show” (which is also highly relevant to one recent discussion, “verification code”).

In my opinion, despite being a one-mark question, the proper way is to set h(t) = 0, write a fully factorised equation in terms of t, derive two t values and then state why the negative t is rejected as t > 0, thus concluding that the tank is empty at t = 20.
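That factorise-and-reject procedure can be sketched with a hypothetical height function having the described features (the actual exam model is not reproduced here; `h` below is a stand-in):

```python
# Hypothetical tank-height function: fully factorised, two roots, the
# negative one rejected. The actual exam model is not shown here.
def h(t):
    return (20 - t) * (t + 5) / 100  # hypothetical; h(0) = 1, h(20) = 0

roots = [-5, 20]                      # from the factors (t + 5) and (20 - t)
valid = [t for t in roots if t >= 0]  # reject t = -5: time is non-negative

# "Show" route: derive t = 20 from h(t) = 0 and reject the negative root.
# "Verify" route: merely check that h(20) == 0.
assert all(h(t) == 0 for t in roots)
```

The distinction at issue is in the comments: deriving t = 20 from h(t) = 0 versus merely substituting t = 20 and checking.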

Huh. Sauce for the goose.

NLP, the question is stupid rather than great, and the answer is problematic in a manner directly relevant to the MitPY verify/show/prove discussion. But the issue here is whether the question+solution contains an error. I don’t think it does.

Plugging in to “show that the tank is empty when t = 20” is, at least on Planet Earth, valid and sensible. The fact that the VCAA on a singular occasion demonstrated a glimmer of common sense doesn’t make their sense then an error.

My contribution to the MELting Pot stems from the post Bernoulli Trials and Tribulations and some of the comments that ensued, specifically those by JF, for those following along at home.

However, my contribution pertains to commercial, third-party trial exams, specifically MAV, and I’m not sure on the copyright implications of discussing and/or posting screenshots of the offending questions. In that vein, I’ll simply mention the exam’s year and question number, and those with access to the exam can add their responses accordingly.

Both exams are MAV Trial Exam 1s – the first is the 2020 exam and the second is the 2011 exam.

In the MAV 2020 MM Trial Exam 1, Question 3 involves a definite integral of the form , where is *quadratic* rather than the *linear* usually seen in Methods.

In the MAV 2011 MM Trial Exam 1, Question 1(b) involves an “integration by recognition” type question – part (i) involves the derivative of the sum of an exponential and a linear term, and part (ii) involves, as before, the antiderivative of the form , where is the same expression as in part (i).

Fire away, everyone.

Everyone, please hold your fire.

Thanks, Steve. As it happens, I am just writing a post now on the 2020 Methods 1 trial exam. The focus is different, but I’ll mention that question and it’ll be the natural place to comment on that issue.

In general I’ll keep the error posts for the formal VCAA exams. But, it’s hard to know what to do with the MAV twilight zone.

Hi Marty, here’s a suggestion for what to do with the MAV twilight zone:

It’s common knowledge that the MAV has an unhealthily cosy relationship with the VCAA.

It follows that impressionable (for many reasons) teachers will see MAV trial exams as reflecting some sort of special VCAA-insider knowledge or special/subtle interpretation of the Study Design (VCAA sanctioned “natural connections”).

As a consequence it follows that many such teachers will get badly influenced and stressed by MAV trial exam bullshit such as dodgy questions and dodgy solutions to dodgy-style questions.

For this reason I propose that MAV trial exam questions and solutions be treated the same way (or perhaps be given their own similar blog) as VCAA questions.

Obviously there are many commercial companies writing all sorts of bullshit but none of it has quite the ‘special status’ that MAV bullshit has.

Thanks, JF. That pretty much captures it. I don’t think one can treat MAV (or any) trial exams the same way as official VCAA exams: screw-ups in the latter directly cause problems. But MAV-VCAA is like a really bad TV crossover episode, and MAV products must accordingly be considered to have, at minimum, a heavy VCAA tinge to them.

Another example. The pedant in me (all VCAA-induced pedantry I’ll add) picked up on this immediately.

VCAA 2016 Exam 1, Question 5(b)(i) – the Assessor’s Report has wrong working. On the surface it could be construed as a minor typo; however, if a student/teacher were to present this working in an actual exam, it would be crucified until the cows come home.

Here’s the question, with the offending “solution” following:

Notice the “not big enough square roots” in the second-last and third-last working steps.

Have a field day, ladies and gentlemen (and marty).

First, let’s deal with marty being placed in the “other” category …

Steve, yes of course it’s just a silly typo or TeX error or whatnot. But it doesn’t matter. It’s wrong.

Just a bit of lighthearted humour there Marty – no offence intended at all.

But yes, given that these assessor’s reports by their very nature must be vetted by multiple assessors, seeing a stupid error like this flow through (and not be corrected 4 years later – I imagine they’ve been emailed queries about it?) is not on. When you’re an organisation whose (sole) focus is to pick on small errors and so forth, and then you go and deal out this crap, it doesn’t bode well for anyone confiding in you.

Anyway rant over, I could go on all day and night.

Marty, looking forward to your next post on the trial exam.

As was my response.

And you’re exactly right. OK, having a typo on the solutions is no big deal, and doesn’t compare to their exam screw-ups. But it *should* have been caught, and it definitely should have been corrected by now.

Steve,

Following yours, another must be picked from the same year, 2016, but in the MM2 report.

Thanks, PN. Up it goes.

Something that I found when I was practising for Methods was a rather annoying NHT question (2018, MCQ 18). I don’t believe it is completely an error, but the premise is that you’re given 5 transformations along with the start and end graphs. You’re also told that the two graphs have the same scale. The ridiculousness is that if you were to apply transformation C or D, you would obtain identical graphs, although one is dilated in comparison to the other, at which point you’d need to carefully distinguish which one is which.

Jesus H. Christ. No, it’s not an error, but “fucking dumb” doesn’t begin to cover it either. Who thinks up such a question? Who signs off on it?

(ps Sai, I did a little edit of your comment, to make the exam a link. The PDF link seemed to want to display but fail.)

We better not forget the dodgy ‘pdf’ on 2016 Exam 2 Q3 (h). If errors were an Olympic event, this one would be on the podium.

Of course that one is on the radar. At this stage I’m just posting errors as people suggest/remind me of them, but a bunch of the exam WitCHes, and a bunch of others will eventually get posted.

John and Marty,

“Lest we forget”.

Lest we forgive.