This one comes from frequent commenter Red Five, and we apologise for the huge delay in posting. It is targeted at those familiar with and, more likely, struggling with Victoria’s VCE rituals:

VCAA uses some pretty strange words in exam questions, and the more exam papers I read, especially for Specialist Mathematics 34, the more I can’t get a firm idea of how they distinguish between the meanings of “show that”, “verify that” and “prove that”.

“Verify” seems to mean “by substitution”, “show that” seems to mean “given these very specific parameters” and “prove that” seems to be more general, but is it really this simple?

This one comes courtesy of a smart VCE student, the issue having been flagged to them by a fellow student. It is a multiple choice question from the 2009 Mathematical Methods, Exam 2; the Examination Report indicates, without comment, that the correct answer is D.

It seems that what amounts to VCE exam marking schemes may be available for purchase through the Mathematical Association of Victoria. This seems very strange, and we’re not really sure what is going on, but we shall give our current sense of it. (It should be noted at the outset that we are no fan of the MAV in its current form, nor of the VCAA in any form: though we are trying hard here to be straightly factual, our distaste for these organisations should be kept in mind.)

Each year, the MAV sells VCE exam solutions for the previous year’s exams. It is our understanding that it is now the MAV’s strong preference that these solutions be written by VCAA assessors. Further, the MAV is now advertising that these solutions are “including marking allocations”. We assume that the writers are paid by the MAV for this work, and we assume that the MAV profits from the sale of the product, which is not cheap. Moreover, the MAV also hosts Meet the Assessors events which, again, are not cheap, and are less cheap for non-members of the MAV. Again, it is reasonable to assume that the assessors and/or the MAV profit from these events.

We do not understand any of this. One would think that simple equity requires that any official information regarding VCE exams and solutions should be freely available. What we understand to be so available are very brief solutions as part of VCAA’s examiners’ reports, and that’s it. In particular, it is our understanding that VCAA marking schemes have been closely guarded secrets. If the VCAA is loosening up on that, then that’s great. If, however, VCAA assessors and/or the MAV are profiting from such otherwise unavailable information, we do not understand why anyone should regard that as acceptable. If, on the other hand, the MAV and/or the assessors are not so profiting, we do not understand the product and the access that the MAV is offering for sale.

We have written previously of the worrying relationship between the VCAA and the MAV, and there is plenty more to write. On more than one occasion the MAV has censored valid criticism of the VCAA, conduct which makes it difficult to view the MAV as a strong or objective or independent voice for Victorian maths teachers. The current, seemingly very cosy, relationship over exam solutions would only appear to make matters worse. When the VCAA stuffs up an exam question, as they do on a depressingly regular basis, why should anyone trust the MAV solutions to provide an honest summary or evaluation of that stuff up?

Again, we are not sure what is happening here. We shall do our best to find out, and commenters, who may have a better sense of MAV and VCAA workings, may comment (carefully) below.

UPDATE (13/02/20)

As John Friend has indicated in his comment, the “marking allocations” appear to be nothing but the trivial annotation of solutions with the allotted marks, not a break-down of what is required to achieve those marks. So, simply a matter of the MAV over-puffing their product. As for the appropriateness of the MAV being able to charge to “meet” VCAA assessors, and for solutions produced by assessors, those issues remain open.

We’ve also had a chance to look at the MAV 2019 Specialist solutions (not courtesy of JF, for those who like to guess such things.) More pertinent would be the Methods solutions (because of this, this, this and, especially, this.) Still, the Specialist solutions were interesting to read (quickly), and some comments are in order. In general, we thought the solutions were pretty good: well laid out with usually, though not always, the seemingly best approach indicated. There were a few important theoretical errors (see below), although not errors that affected the specific solutions. The main general and practical shortcoming is the lack of diagrams for certain questions, which would have made those solutions significantly clearer and, for the same reason, should be encouraged as standard practice.

For the benefit of those with access to the Specialist solutions (and possibly minor benefit to others), the following are brief comments on the solutions to particular questions (with section B of Exam 2 still to come); feel free to ask for elaboration in the comments. The exams are here and here.

Exam 1

Q5. There is a Magritte element to the solution and, presumably, the question.

Q6. The stated definition of linear dependence is simply wrong. The problem is much more easily done using a 3 × 3 determinant.
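For readers who want the determinant approach spelled out: three vectors in R³ are linearly dependent exactly when the 3 × 3 determinant of the matrix with those vectors as rows is zero. A minimal sketch in Python, with hypothetical vectors (these are our examples, not the vectors from the exam question):

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c (cofactor expansion)."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def linearly_dependent(a, b, c):
    """Three vectors in R^3 are linearly dependent iff the determinant is 0."""
    return det3(a, b, c) == 0

# Hypothetical examples, not from the exam:
print(linearly_dependent((1, 2, 3), (4, 5, 6), (7, 8, 9)))  # True: row3 = 2*row2 - row1
print(linearly_dependent((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # False: standard basis
```

With integer entries this test is exact; for floating-point vectors one would compare the determinant against a small tolerance rather than zero.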

Q7. Part (a) is poorly set out and employs a generally invalid relationship between Arg and arctan. Parts (c) and (d) are very poorly set out, not relying upon the much clearer geometry.

Q8. A diagram, even if generic, is always helpful for volumes of revolution.

Q9. The solution to part (b) is correct, but there is an incorrect reference to the forces on the mass, rather than the ring. The expression “… the tension T is the same on both sides …” is hopelessly confused.

Q10. The question is stupid, but the solutions are probably as good as one can do.

Exam 2 (Section A)

MCQ5. The answer is clear, and much more easily obtained, from a rough diagram.

MCQ6. The formula Arg(a/b) = Arg(a) – Arg(b) is used, which is not in general true.
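The failure is easy to exhibit numerically: the identity only holds modulo 2π, since the principal argument must land in (-π, π]. A quick check in Python with cmath, using numbers of our own choosing (not the exam’s):

```python
import cmath
import math

a = -1 + 0j   # Arg(a) = pi
b = -1j       # Arg(b) = -pi/2

lhs = cmath.phase(a / b)               # principal Arg(a/b) = -pi/2
rhs = cmath.phase(a) - cmath.phase(b)  # pi - (-pi/2) = 3*pi/2, outside (-pi, pi]

# The two differ by 2*pi: the "formula" only holds modulo 2*pi,
# because the principal argument must land in (-pi, pi].
print(math.isclose(lhs, rhs))                 # False
print(math.isclose(lhs - rhs, -2 * math.pi))  # True
```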

MCQ11. A very easy question for which two very long and poorly expressed solutions are given.

MCQ12. An (always) poor choice of formula for the vector resolute leads to a solution that is longer and significantly more prone to error. (UPDATE 14/2: For more on this question, go here.)

MCQ13. A diagram is mandatory, and the cosine rule alternative should be mentioned.

MCQ14. It is easier to first solve for the acceleration, by treating the system as a whole.
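To illustrate the “system as a whole” shortcut on a standard, and purely hypothetical, connected-particle setup: a hanging mass m1 drags a mass m2 along a smooth table via a light string over a pulley. Treating the two masses as one system removes the tension from the first equation entirely:

```python
g = 9.8             # m/s^2
m1, m2 = 2.0, 3.0   # kg; hypothetical values, not from the exam

# System as a whole: the only external driving force is m1*g,
# and the total mass being accelerated is m1 + m2.
a = m1 * g / (m1 + m2)

# The tension then follows from Newton's second law applied to m2 alone.
T = m2 * a

# Consistency check against the per-mass equation for m1: m1*g - T = m1*a
assert abs((m1 * g - T) - m1 * a) < 1e-12
print(a, T)
```

Solving for the acceleration first, as above, avoids simultaneous equations in a and T entirely.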

MCQ19. A slow, pointless use of CAS to check (not solve) the solution of simultaneous equations.

Q1. In Part (a), the graphs are pointless, or at least a distant second choice; the choice of root is trivial, since y = tan(t) > 0. For part (b), the factorisation should be noted. In part (c), it is preferable to begin with the chain rule in the form , since no inverses are then required. Part (d) is one of those annoyingly vague VCE questions, where it is impossible to know how much computation is required for full marks; the solutions include a couple of simplifications after the definite integral is established, but God knows whether these extra steps are required.

Q2. The solution to Part (c) is very poorly written. The question is (pointlessly) difficult, which means clear signposts are required in the solution; the key point is that the zeroes of the polynomial will be symmetric around (-1,0), the centre of the circle from part (b). The output of the quadratic formula is necessarily a mess, and may be real or imaginary, but is manipulated in a clumsy manner. In particular, a factor of -1 is needlessly taken out of the root, and the expression “we expect” is used in a manner that makes no sense. The solution to the (appallingly written) Part (d) is ok, though the centre of the circle is clear just from symmetry, and we have no idea what “ve(z)” means.

Q3. There is an aspect to the solution of this question that is so bad, we’ll make it a separate post. (So, hold your fire.)

Q4. Part (a) is much easier than the notation-filled solution makes it appear.

Q5. Part (c)(i) is weird. It is a 1-point question, and so presumably just writing down the intuitive answer, as is done in the solutions, is what was expected and is perhaps reasonable. But the intuitive answer is not that intuitive, and an easy argument from considering the system as a whole (see MCQ14) seems (mathematically) preferable. For Part (c)(ii), it is more straightforward to consider the system as a whole, making the tension redundant (see MCQ14). The first (and less preferable) solution to Part (d) is very confusing, because the two stages of computation required are not clearly separated.

Q6. It’s statistical inference: we just can’t get ourselves to care.

UPDATE (26/06/20)

The Specialist Maths examination reports are finally, finally out (here and here), so it seems worth revisiting the MAV “Assessor” solutions. In summary, the clumsiness of and errors in the MAV solutions as indicated above (and see also here and here) do not appear in the reports; in the main this is because the reports are pretty much silent on any aspect involving some subtlety. Sigh.

Some specific comments:

EXAM 1

Q5 Yes, Magritte-ish. Justifying that the critical points are extrema was not expected, meaning conscientious students wasted their time.

Q6 The error in the MAV solutions is ducked in the report.

Q7 The error in the MAV solutions is ducked in the report.

EXAM 2 (Section A)

MCQ6 The error in the MAV solutions is ducked in the report.

MCQ11 The report is silent.

MCQ12 A huge screw-up of a question, to which the report hemidemisemi confesses: see here.

MCQ14 The report suggests the better method for solving this problem.

EXAM 2 (Section B)

Q2 Jesus. This question was intrinsically confusing and very badly worded, with the students inevitably doing poorly. So, why the hell is the examination report almost completely silent? The MAV solutions were a mess, but the absence of comment in the report is disgraceful.

Q3 The solution in the report is ok, although more could have been written. But, it’s not the garbled nonsense of the MAV solution, as detailed here.

This one comes courtesy of Christian, an occasional commenter and professional nitpicker (for which we are very grateful). It is a question from a 2016 Abitur (final year) exam for the German state of Hesse. (We know little of how the Abitur system works, and how this question may fit in. In particular, it is not clear whether the question above is a statewide exam question, or whether it is more localised.)

Christian has translated the question as follows:

A specialty store conducts an ad campaign for a particular smartphone. The daily sales numbers are approximately described by the function g with , where t denotes the time in days counted from the beginning of the campaign, and g(t) is the number of sold smartphones per day. Compute the point in time when the most smartphones (per day) are sold, and determine the approximate number of sold devices on that day.
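The rule of g is not reproduced above, but the intended method is standard single-variable optimisation: solve g′(t) = 0 and evaluate. As an illustration only, here is the computation for a hypothetical sales model g(t) = A·t·e^(−k·t), our stand-in rather than the actual Abitur function, whose maximum occurs at t = 1/k:

```python
import math

A, k = 200.0, 0.5   # hypothetical campaign parameters

def g(t):
    """Hypothetical daily sales model: rises, peaks, then decays."""
    return A * t * math.exp(-k * t)

# g'(t) = A * e^(-k*t) * (1 - k*t) = 0  =>  t = 1/k
t_star = 1 / k
peak = g(t_star)     # = A / (k * e)

# Numerical sanity check: no nearby grid point beats the critical point.
assert all(g(t_star + d) <= peak for d in (-0.1, -0.01, 0.01, 0.1))
print(t_star, peak)
```

For these made-up parameters the peak lands on day 2, with roughly 147 devices sold; the exam would of course expect the derivative computation by hand.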

We have a short Specialist post coming, and we’ll have more to write on the 2019 VCE exams once they’re online. But, for now, one more Mathematical Methods WitCH, from the 2019 (calculator-free) Exam 1:

Update (04/07/20)

The main crap here, of course, is part (f): as commenter John Friend puts it, what the hell is this question supposed to be testing? And, sure, the last part of the last question on an exam is allowed to be a little special, but one measly mark? Compared to the triviality of the rest of the question?

Of course, students bombed part (f). The examination report indicates that 19% of students correctly answered that there is one solution to the equation; as suggested by commenter Red Five, it’s also a pretty safe bet that the majority of students who got there did so with a Hail Mary guess. (It should be added, the students didn’t do swimmingly well on the rest of Question 9, the CAS-lobotomising having worked its usual magic.)

OK, so what did examiners expect for that one measly mark? We’ll get to a reasonable solution below, but let’s first consider some unreasonable solutions.

This question was not well done. Few students attempted to draw a rough sketch of each equation and use addition of ordinates.

Gee, thanks. Drawing a “rough sketch” of either of these compositions is anything but trivial. For one measly mark. We’ll look at sketching aspects of these graphs below, but let’s get on with another unreasonable solution.

Given the weirdness of part (f), a student might hope that parts (a)-(e) provide some guidance. Let’s see.

Part (b) (for which the examination report contains an error) gets us to conclude that the composition

has negative derivative when x > 1.

Part (c) leads us to the composition

having x-intercept when x = log(3).

Finally, Part (e) gives us that the composition f(g(x)) has the sole stationary point (0,4). How does this information help us with Part (f)? Bugger all.

So, what if we include the natural implications of our previous work in our sketches? Well, um, great. We’re left still hunting for that one measly mark.

OK, the other parts of the question are of little help, and the examiners are of no help, so what else do we need? There are two further pieces of information we require (plus the Intermediate Value Theorem). First, note that

Secondly, note that

if x is huge.

Then, given we know the slopes of the compositions, we can finally complete our rough sketches. Now, let’s write S(x) for our sum function g(f(x)) + f(g(x)). We know S(x) > 0 unless one of our compositions is negative. So, the only place we could get S = 0 is if x > log(3). But S(log(3)) > 0, and eventually S is hugely negative. That means S must cross the x-axis (by IVT). But, since S is decreasing for x > 1, S can only cross the axis once, and so S = 0 must have exactly one solution.
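The logic of the argument (positive somewhere, eventually negative, strictly decreasing past some point, hence exactly one root) is worth isolating, since it is presumably what the one mark was for. The sketch below runs the same argument numerically on a stand-in function S(x) = 4 − e^x, which shares those three properties; it is emphatically not the exam’s sum function.

```python
import math

def S(x):
    """Stand-in for the sum function: positive at 0, strictly decreasing,
    and eventually (hugely) negative."""
    return 4 - math.exp(x)

# IVT setup: S(0) > 0 and S(10) < 0, and S is strictly decreasing,
# so there is exactly one root in between. Bisection homes in on it.
lo, hi = 0.0, 10.0
assert S(lo) > 0 and S(hi) < 0
for _ in range(100):
    mid = (lo + hi) / 2
    if S(mid) > 0:
        lo = mid      # S still positive: the root lies to the right
    else:
        hi = mid      # S non-positive: the root lies to the left

root = (lo + hi) / 2
print(root)  # the unique solution, here ln(4)
```

The monotonicity is what upgrades “at least one solution” (IVT) to “exactly one solution”.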

Tons of nonsense to post on, and the Evil Mathologer is breathing down our neck. We’ll have (at least) three posts on last week’s Mathematical Methods exams. This one is by no means the worst to come, but it fits in with our previous WitCH, so let’s quickly get it going. It is from Exam 1. (No link yet, but the Study Design is here.)

Update (15/06/20)

The examination report (and exam) is out, so it’s time to wade into this swamp. Before doing so, we’ll note the number of students who sank; according to the examination report, the average score on this question was 0.14 + 0.09 + 0.14 ≈ 0.4 marks out of 4. Justified or not, students had absolutely no clue what to do. Now, into the swamp.

The main wrongness is in Part (b), but we’ll begin at the beginning: the very first sentence of Part (a) is a mess. Who on Earth writes

“The function is a polynomial function …”?

It’s like writing

“The Prime Minister Scott Morrison of Australia, Scott Morrison is a crap Prime Minister”.

Yes, you may properly want to emphasise that Scott Morrison is the Prime Minister of Australia, and he is crap, but that’s not the way to do it. This is nitpicking, of course, but there are two reasons to do so. The first reason is that there is no reason not to: why forgive the gratuitously muddled wording of the very first sentence of an exam question? From these guys? Forget it. The second reason is that the only possible excuse for this ridiculous wording is to emphasise that the domain of f is all of ℝ, which turns out to be entirely pointless.

Now, to Part (a) proper. This may come as a surprise to the VCAA overlords, but functions do not have “rules”, at least not unique ones. The functions and , for example, are the exact same function. Yes, this is annoying, but we’re sorry, that’s the, um, rule. Again this is nitpicking and, again, we have no sympathy for the overlords. If they insist that a function should be regarded as a suitable set of ordered pairs then they have to live with that choice. Yes, eventually ordered pairs are the precise and useful way to define functions, but in school it’s pretty much just a pedantic pain in the ass.

To be fair, we’re not convinced that the clumsiness in the wording of Part (a) contributed significantly to students doing poorly. That is presumably much more to do with the corruption of students’ arithmetic and algebraic skills, the inevitable consequence of VCAA and ACARA calculatoring the curriculum to death.

On to Part (b), where, having found or whatever, we’re told that is “a function with the same rule as ”. This is ridiculous and meaningless. It is ridiculous because we never did anything with in the first place, and so it would have been a hell of a lot clearer to have simply begun the damn question with on some unknown domain . It is meaningless because we cannot determine anything about the domain from the information provided. The point is, in VCE the composition is either defined (if the range is wholly contained in the positive reals), or it isn’t (otherwise). End of story. Which means that in VCE the concept of “maximal domain” makes no sense for a composition. Which means Part (b) makes no sense whatsoever. Yes, this is annoying, but we’re sorry, that’s the, um, rule.
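The “defined or not” point can be made concrete: in the convention described above, a composition f∘g exists exactly when the range of g is contained in the domain of f, and there is no “maximal domain” to hunt for. A small sketch, with intervals as (lo, hi) pairs and hypothetical functions of our own as examples:

```python
import math

INF = math.inf

def composition_defined(inner_range, outer_domain):
    """f(g(x)) is defined iff the range of g (an interval) is a subset of
    the domain of f (an interval). Intervals are closed where finite."""
    (a, b), (c, d) = inner_range, outer_domain
    return c <= a and b <= d

# Hypothetical example: f(x) = sqrt(x) has domain [0, inf).
dom_f = (0, INF)

# g1(x) = x**2 + 1 on R has range [1, inf): f(g1(x)) is defined.
# g2(x) = x - 5   on R has range (-inf, inf): f(g2(x)) is not defined.
print(composition_defined((1, INF), dom_f))      # True: composition exists
print(composition_defined((-INF, INF), dom_f))   # False: no composition, end of story
```

On this convention the answer is a yes/no, not a domain, which is exactly why Part (b) as written has nothing to ask.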

Finally, to Part (c). Taking (b) as intended rather than written, Part (c) is ok, just some who-really-cares domain trickery.

In summary, the question is attempting and failing to test little more than a pedantic attention to boring detail, a test that the examiners themselves are demonstrably incapable of passing.

The following WitCH is pretty old, but it came up in a tutorial yesterday, so what the Hell. (It’s also a good warm-up for another WitCH, to appear in the next day or so.) It comes from the 2011 Mathematical Methods Exam 1:

For part (a), the Examination Report indicates that f(g(x)) = √((x+2)(x+8)), leading to c = 2 and d = 8, or vice versa. The Report indicates that three quarters of students scored 2/2, “However, many [students] did not state a value for c and d”.

For Part (b), the Report indicates that 84% of students scored 0/2. After indicating the intended answer, (-∞,-8] ∪ [-2,∞), or equivalently R \ (-8,-2), the Report goes on to comment:

“This question was very poorly done. Common incorrect responses included [-3,3] (the domain of f(x); x ≥ -2 (as the ‘intersection’ of x ≥ -8 with x ≥ -2); or x ≥ -8 (as the ‘union’ of x ≥ -8 with x ≥ -2). Those who attempted to use the properties of composite functions tended to get confused. Students needed to look for a domain that would make the square root function work.”

The Report does not indicate how students got “confused”, although the composition of functions is briefly discussed in the Study Design (page 72).
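For what it’s worth, the intended computation can be checked mechanically: f(g(x)) = √((x+2)(x+8)) is defined exactly when the radicand is non-negative, that is, on (-∞,-8] ∪ [-2,∞). A quick Python check, which also shows why the “intersection” and “union” answers quoted in the Report give the wrong sets:

```python
def in_maximal_domain(x):
    """x is in the maximal domain of sqrt((x+2)(x+8)) iff the radicand is >= 0."""
    return (x + 2) * (x + 8) >= 0

# The correct domain is x <= -8 or x >= -2 (endpoints included).
assert in_maximal_domain(-10) and in_maximal_domain(-8)
assert in_maximal_domain(-2) and in_maximal_domain(0)
assert not in_maximal_domain(-5)    # inside (-8, -2), the radicand is negative

# The Report's quoted wrong answers fail at easy test points:
assert not (-10 >= -2)              # "x >= -2" wrongly excludes x = -10,
assert in_maximal_domain(-10)       # ...which is in fact in the domain.
assert (-5 >= -8)                   # "x >= -8" wrongly includes x = -5,
assert not in_maximal_domain(-5)    # ...which is not in the domain.
```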

Our second (and last for now) NHT WitCH is due to the ever-vigilant John the Merciless (who shall, to begin, hold his fire …). It comes from the 2019 Exam 1 of Specialist Mathematics (calculator-free):

The examination “report” gives the answers as: (a) (51,65); (b) 0.02, 0.03 accepted.

We’ve finally found some time to take a look at VCAA’s 2019 NHT exams. They’re generally bad in the predictable ways, and they include some specific and seemingly now standard weirdness that we’ll try to address soon in a more systematic manner. WitCHwise, we were tempted by a number of questions, but we’ve decided to keep it to two or three.

Our first NHT WitCH is from the final question on Exam 2 (CAS) of Mathematical Methods:

As usual, the NHT “Report” indicates nothing of how students went, and little of what was expected. In regard to part f, the Report writes,

p(x) = q(x) = x, p'(x) = q'(x) = 1, k = 1/e

For part g, all that the Report provides is the answer, k = 1.

The VCAA also provides sample Mathematica solutions to schools trialling Methods CBE. For the questions above, these solutions are as follows:

Our second WitCH of the day also comes from the 2017 VCE Specialist Mathematics Exam 2. (Clearly an impressive exam, and we haven’t even gotten to the bit about using inverse trig functions to design a brooch.) It is courtesy of the mysterious SRK, who raised it in the discussion of an earlier WitCH.

Question 5 of Section B of the (CAS) exam concerns a boat and a jet ski. Though SRK was concerned with one particular aspect, the entire question is worth pondering:

The Examiner’s Report indicates an average student score of 1.4 on part a, and comments,

Students plotted the initial positions correctly but significant numbers of students did not label the direction of motion or clearly identify the jet ski and the boat. Both requirements were explicitly stated in the question.

For part i, the Report indicates an average score of 1.3, and comments,

Most students found correct expressions for velocity vectors. The most common error was to equate these velocity vectors rather than equating speeds.

For part ii, the Report gives the intended answer as (3,3). The Report indicates that slightly under half of students were awarded the mark, and comments,

Some answers were not given in coordinate form.

For part i, the Report suggests the answer (with the displayed answer adorned by a weird, extra root sign). The report indicates that a little over half of the students were awarded the mark, and comments,

A variety of correct forms was given by students; many of these were likely produced by CAS technology, including expressions involving double angles. Students should take care when transcribing expressions from technology output as errors frequently occur, particularly regarding the number and placement of brackets. Some incorrect answers retained vectors in the expression.

For Part ii, the Report indicates the intended answer of 0.33, and that 15% of students were awarded the mark for this question. The Report comments,

Many students found this question difficult. Incorrect answers involving other locally minimum values were frequent.

The Report indicates an average score of 1.3 on part d, and comments,

Most students correctly equated the vector components and solved for t . Many went on to give decimal approximations rather than supplying the exact forms. Students are reminded of the instruction saying that an exact answer is required unless otherwise specified.