Did VCAA Misgrade the 2022 Exams and, if so, Why?

This will be my last VCAA post for a while. I’m sick of it all. And, anyway, it’s now a matter of waiting to see what the Minister and the newly acting CEO of VCAA will do. I think both of them have demonstrated some genuine interest in repairing the mathematics exams, and I think both of them deserve an honest shot at it, without a nasty blogger sniping at them. But, there’s one last post I think must be written.

One of the lingering puzzles of VCAA’s Deloitte Debacle is why VCAA were so determined to game their mathematics exam reviews. VCAA were clearly determined beyond all plausibility to ensure that Deloitte examined nothing beyond trivialities, and it seems VCAA didn’t let a proper mathematician get within a country mile of the exam questions. But why? Sure, the 2022 exams were, as always, bad, with, as always, bad errors, and any halfway honest review would have been plenty embarrassing. But bureaucratic deceit can only hide so much, and not nearly as much as VCAA had to hide. So now, rather than simply looking like a gang of clowns, VCAA looks like a gang of deceitful clowns.

I’ve pondered this, and I now think this question has a simple answer. I think it is because VCAA misgraded the 2022 exams. Moreover, I think in significant part VCAA knowingly misgraded the 2022 exams: they were fully aware at the time of grading of at least four of the five major errors; they then made decisions about the treatment of these errors that, at minimum, were much less generous than they might have been. If that is the case then the exam reports’ deafening silence on these errors, followed by VCAA’s mega-gaming of their reviews, is more understandable as a simple case of digging in, and then digging in further.

Of course this simple answer, if correct, raises a second question: why did VCAA decide to treat the five errors in such a manner? I think this is more complicated, with differing answers for each error. So, what I will do now is go through each of the five major errors, considering to what extent the exam report, and other evidence, suggests the affected exam question was misgraded, and then speculating why. I’ll consider the questions in increasing order of difficulty of knowing and explaining VCAA’s conduct.


Methods Exam 2, QB4(e)(ii)

I think this is the easiest one to explain: I think VCAA was unaware of the error during grading and may well still be unable to see the problem. To recap, two functions are defined in terms of k > 0, and A(k) is the area of the “region bound[ed]” by the functions. The question then asks why the domain of A(k) “does not include all values of k”. The intended answer was that for certain k “there would be no bounded area”.

I can understand why this question might not bother teachers, but it is an appalling question. The question jumped out at me when I skimmed the exam, and it really irritated me when I met with an Olympiad-type Methods student soon after: that student had correctly regarded the question as absurd but still answered the question, perfectly validly, on the exam: “because you already said k > 0”, or words to that effect. The student presumably didn’t get their mark for the question.


Specialist Exam 2, MCQ4

This is the infamous complex number question, with no correct answer. This, beyond all others, is the exam question that makes VCAA’s various “no errors” claims so stunning. In particular, VCAA would have known of this error a nanosecond after the exam was sat. But why lie about it? Why, even now, does the exam report claim there is a correct answer? I think the reason is pretty obvious. But we first have to deal with VCAA’s secret but likely excuse, which is quite distinct from VCAA’s probable reason.

An MCQ may have no correct answer for a minor, technical reason, meaning it may be preferable to overlook the error. Of course what counts as “minor” is a judgment call. For example, the error in MM2 MCQ14 might be considered minor enough to ignore, as VCAA did. (Even if the exam report’s subsequent fibbing about the exam question should not be ignored.) On the other hand, the error in NHT MM2 MCQ3 is not minor, two answers should have been deemed correct, and this did not occur. (And the exam report’s fibbing about the exam question should also not be ignored.)

So, what about the 2022 complex MCQ? To answer this MCQ, one must assume that the given complex roots a and c are not real, and it seems certain that, after the fact, VCAA have declared this assumption to be “minor”: there is simply no other way to pretend that the MCQ can be answered. But it is also pretty damn certain that VCAA are lying to themselves, that they know damn well the assumption is not remotely minor. It is simply an excuse.

The real reason that VCAA graded the MCQ as if it made the intended sense is most likely because the MCQ is easy and 61% of students answered the question “correctly”. What was VCAA then to do? If they had nixed the question, giving every student (or no student) the mark, that would likely have annoyed lots of students. And there’s an argument for VCAA grading the question as they did: on average, the students who gave the “correct” answer would have understood more about the question than the other students. I wouldn’t agree with that argument, but it’s a judgment call and there’s some truth to VCAA’s grading. But of course this truth offers zero excuse for VCAA’s falsehood about the serious, fundamental error in the question.


Specialist Exam 2, MCQ19

This is the “confidence interval” question, which is completely garbled, and VCAA must have been aware of this very early on. (In brief, to compute a confidence interval we need a sample, and in particular we need to know: (a) the size of the sample; (b) the mean of the sample. But, the only size given in the MCQ is for a smaller group, and the only mean deducible is of the general distribution.)
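For context, here is the standard computation the MCQ presumably intended, sketched in Python with purely hypothetical numbers (the sample mean, standard deviation and sample size below are illustrative assumptions, not values from the exam):

```python
import math

def confidence_interval(sample_mean, sigma, n, z=1.96):
    """95% confidence interval for a population mean, given a sample
    mean, a (known) standard deviation and the sample size."""
    half_width = z * sigma / math.sqrt(n)
    return (sample_mean - half_width, sample_mean + half_width)

# Hypothetical inputs: the point is that both the sample size n and
# the sample mean are required, and the MCQ supplies neither -- the
# only size given is for a smaller group, and the only mean deducible
# is that of the general distribution.
low, high = confidence_interval(sample_mean=52.0, sigma=5.0, n=25)
print(low, high)  # (52 - 1.96, 52 + 1.96)
```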

It is less clear why VCAA decided not to nix this MCQ, particularly since only just over half of students answered the question “correctly”. Possibly, VCAA simply decided that the intended autopiloting of size+mean=interval was clear enough to justify keeping the question. It justifies nothing of the sort, but one can almost understand the thinking.


Specialist Exam 1, Q3

It now gets much more puzzling. Part (b) of this question concerns a serviced coffee machine, for which “the mean time taken to dispense 25 cups of coffee is found to be 9 seconds”. The statement is entirely unambiguous and, with reference to either part (a) or plausibility, is obviously not what was intended. But an unambiguous statement has to be respected as such and, in their grading of the question, it seems likely that VCAA failed to do so.
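To see how much hangs on the wording: read literally, 25 cups took 9 seconds in total, a mean of 9/25 = 0.36 seconds per cup; read as intended, the mean is 9 seconds per cup. Here is a sketch of the downstream effect on a test statistic, with hypothetical population parameters (the mu and sigma below are made up, not taken from part (a)):

```python
import math

n = 25
literal_mean = 9 / n    # literal reading: 0.36 seconds per cup
intended_mean = 9.0     # intended reading: 9 seconds per cup

# Hypothetical population parameters, purely for illustration:
mu, sigma = 10.0, 2.0

def z_stat(xbar):
    """z-statistic for the mean of a sample of n observations."""
    return (xbar - mu) / (sigma / math.sqrt(n))

print(z_stat(literal_mean))   # about -24: wildly significant
print(z_stat(intended_mean))  # about -2.5: a very different conclusion
```

The two readings lead to entirely different answers, which is why the grading of the “literal meaning” students matters so much.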

The exam report simply pretends the question stated what was intended to be stated (the mean time for 25 cups was 9 seconds per cup). This is fundamentally dishonest, of course, but it does not necessarily indicate how VCAA graded students who answered the question as literally written. Notwithstanding the exam report’s blunt stance, we do not know how the “literal meaning” students were graded. But, we have a clue.

I have been told (but cannot confirm) that students who answered this question correctly as written were not awarded the two marks for the question. If true, that is obviously inappropriate and entirely unfair, but it also seems plain nuts. Why not simply award proper marks for either interpretation of the question? Sure, one can argue that plausibility should trump literal meaning. But usually after stuffing up something, one tends to be a little humble, a little less arrogant about right and wrong. I really don’t understand why VCAA would have taken such an arrogant approach to the grading in this situation.


Specialist Exam 2, QB6(f)

This last one completely stumps me. It is the grading of the CAN + LIQUID = TOTAL question, the problem being that VCAA didn’t declare which two variables should be considered independent. That started a (cannery) row on the exam discussion post, when a student, Names, suggested one independence assumption over the other: Names went for T = C + L on the exam, with C and L independent. The exam report, however, went for L = T – C, with T and C independent. This is way, way more presumptuous than VCAA’s handling of the coffee cup question.

Bluntly, there is absolutely zero argument for VCAA retroactively declaring one independence assumption or another. The only reasonable approach to the grading is to permit either independence assumption. Which maybe they did. As with the coffee question, and notwithstanding the exam report picking a winner, we cannot be sure how VCAA graded the loser. But, as with the coffee question, we have a clue.
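The arithmetic makes the stakes concrete. With T = C + L and hypothetical variances (the numbers below are made up, not the exam’s), the two independence assumptions give different answers for the same unknown:

```python
# Suppose Var(C) and Var(T) are known (hypothetical values) and we
# want Var(L), where T = C + L (total = can + liquid).
var_C = 4.0
var_T = 25.0

# Assumption 1 (Names's reading): C and L independent, so
#   Var(T) = Var(C) + Var(L)  =>  Var(L) = Var(T) - Var(C)
var_L_independent_CL = var_T - var_C   # 21.0

# Assumption 2 (the exam report's reading): T and C independent, so
#   L = T - C  =>  Var(L) = Var(T) + Var(C)
var_L_independent_TC = var_T + var_C   # 29.0

print(var_L_independent_CL, var_L_independent_TC)  # 21.0 29.0
```

Either assumption is defensible a priori; the point is that the question, as written, determines neither, so the grading has to accept both.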

The student Names received his results, and on the can question he scored 1/3 from one grader and 2/3 from the other. Names was pretty sure he had answered the question correctly for his T = C + L interpretation. So in February, after the exam report came out, Names applied for an “inspection of exam materials”. Still confident, Names appealed his score on the question. The result of Names’s appeal was predictable: “the awarded marks have been confirmed”.

Of course I don’t really know anything about Names other than, um, his names. So, he could be making this all up (and maybe he didn’t really get a 41 in Specialist or score a 98.95 ATAR). But I know who I believe.

I really really don’t get it. What godly reason could VCAA have had for misgrading the can question? But there is good reason to believe they did.


Of course the entire thing could be solved by VCAA being transparent and honest. And competent. It would really help if VCAA were competent.

69 Replies to “Did VCAA Misgrade the 2022 Exams and, if so, Why?”

  1. Opinion: Multiple choice questions are easily salvaged. If there is an error, don’t count them. Far from perfect, but probably minimises harm.

    The Can question… VCAA exams have been poor on this content since it first appeared, so not entirely surprising. There are two valid approaches? Easy, check that they are both valid (even though the question is impossible as it is written) and then issue an update to exam markers. Easy fix.

    The question that bugs me is the coffee-machine. Deleting words that have been added by accident? VCAA seems to be able to do that. Adding in words THAT ALTER THE ENTIRE MEANING is not OK.

    I’m not sure there is a fair fix in this instance. Some effort to be fair would have been better than nothing of course.

    1. Thanks, RF. I think “salvaged” is too strong a word, and I think the method of salvaging is not always the same or clear. In the 2023 NHT question, there were two correct answers according to the question as written, and so these two answers should have been marked correct. For MCQ4 and MCQ19 above, I think both are debatable, but I’d “salvage” both by nixing them.

      Both the coffee question and the can question have obvious “fixes” (while both errors are disgraceful). I’m fine with giving full marks for answers based upon the intended meaning of the coffee question.

    2. I gave feedback about errors on the Specialist Maths exams during their marking. It included my concern about how Exam 1 Question 3 part (b) would be marked. My feedback was not taken sufficiently seriously to affect the exam reports and presumably (and more importantly) the grading. If others complained, it was also to similar effect.

      1. Thanks, John. There is zero chance that VCAA were unaware of the contention around the four SM questions. And whatever VCAA said and say, there is epsilon chance that they were unaware, at the time of grading, that the questions were stuffed. What’s not known, at least to me, is how they graded the two non-MCQ questions and, for all four questions, why they decided to grade in the way they did.

      2. For what it is worth (epsilon) – I’m grateful to the very small number (delta) of teachers who spoke up on this and other issues.

        And I know this thanks is relatively worthless when taken in context.

        1. Is there any evidence that delta > 1?

          And of course, as always, thanks to the HoM group for doing bugger all, and to the MAV for being Quislings Incorporated.

          1. Even if delta = 2, that is still insignificant given the number of SM3&4 teachers at the time.

            I do wonder though what p value would be required to reject the null hypothesis…

            (Sorry, not trying to make light of a dark situation, just trying to navigate to the end of another difficult year)

            1. For whom are you objecting, the HoM or MAV? I’m happy to have you go into bat for either or both. But unless you give me reason to think otherwise, I stand by my criticism. I’m appalled by both.

                1. Of MAV and/or HoM? How do you know they are still silent? They were both, at best, collaborators for years, but I know nothing of their current stance or input.

                  1. Sorry, I was referring to the Silence of the James.

                    Re: MAV and/or HoM. I’ve seen and heard nothing that is disagreeable to your opinion.

                    1. Oh, I see. In that case you’re simply being rude and presumptuous. James had every right to complain, and is under no obligation to engage further.

                    2. OK. In that case, and with reference to your original comment, I replace my earlier comment with

                      Every “need for a comment like that please.”

  2. > On the other hand, the error in NHT MM2 MCQ3 is not minor, two answers should have been deemed correct, and this did not occur

    I just started year 2 methods, so I could be wrong about this, but it looks to me like one answer is correct, and another is technically correct. Obviously that’s a major stuff-up, but my assumption would be that most students who picked (-1,1) would have incorrectly assumed that the domain didn’t include -1 or 1. As such, I don’t think it would be fair to award them the same 1 mark awarded to students who picked [-1,1].
    I guess it’s just another case to add to the list of VCAA stuff-ups that can’t be fixed in a fair manner

    1. No. Both answers are correct.

      I agree that the closed interval can be considered the better answer: it’s maximal and it includes understanding of another idea. But there’s zero argument for penalising the open interval answer: it’s a perfectly correct answer to a straight, unambiguous question.

      1. A lot of multiple choice exams say to choose the “best” response, as opposed to the “correct” response. How would you feel about VCAA doing that, which would solve a lot of these issues?

        Obviously they can’t retroactively change that rule though, so both answers are correct this time

        1. Nah. The issue is the writers not being sufficiently mathematical and sufficiently careful (the former making the latter much more difficult). The solution is obvious.

          Plus, handing VCAA a “best response” wild card to play any time they wish would be a hilariously bad move. We can’t even get them to admit to their stuff-ups now, when they’ve asked for the correct response. Imagine trying to have VCAA be responsible in Best Response Land.

          1. Once upon a time VCAA used the “E. None of the above” wild card. There have been many occasions over the last decade when playing this card would have been handy. But you either have to know when to play it or play it every time. Either way has its problems.

        2. I actually think “best” opens a new can of worms and may do more harm than good.

          If two answers are correct, who decides which is the “best”?

          I see the appeal but, no.

        3. Choose the best response to the following: 1 + 1 =

          A. 0
          B. 1
          C. 5
          D. 6
          E. 11

          (Apart from the obvious wrongnesses of such a question type, who gets to decide what “best” means? A creative mind could argue any of the above options as the “best” response. And it’s just giving a Hall Pass for the writers “not being sufficiently mathematical and sufficiently careful.”)

  3. Are there monetary prizes for the top student(s)? If so then an Olympiad-level student could have sued. That would have caused an earthquake.

    Students of course are loath to cause trouble for examiners. I was tutoring a student who was wrongly marked on a major exam, lowering her mark from the nineties to the seventies. Although I offered to write a polite letter for her teacher, she preferred to keep shtum.

    The most egregious example I know of was my first year Pure Maths exam at Melbourne University. The honours stream had been taught a different syllabus from that examined. That’s when I gave up on preparing for exams.

    1. There are “Premier’s awards” but no money prizes.

      Whether missing the mark meant losing a scholarship and therefore future loss of income…

      1. To the Age? Yes the effect on many students has been serious. I picked the example of a top student because that would have been easiest to prosecute.

          1. Yes. Well written and very thought-provoking for anyone willing to reflect on the process(es) both large and small.

            I do wonder when and how this will all end. Maybe it won’t.

            1. The point is, Tony writes specifically about a top student being stiffed for a prize because of VCAA’s stuff up:

              “I know of one student who failed to get the Premier’s Prize for being the top student because the official answer to the question was wrong. The student’s mark was corrected some months later, but no mention of the prize was made.”

              1. Yes. Understood.

                Although I will admit not realising the full ramifications of said issue when I first read the letter.

                Quite sad really.

  4. A few thoughts …..

    This does sound very plausible indeed and informative in explaining just how bad things are. It also reinforces the need to get exams right in the first place.

    There had been warning signs for some time that things weren’t kosher. The then CEO either didn’t think it worthwhile or thought it too hard to reform and so decided on a cover up which made things worse in really proving the case for change.

    May be worth communicating if and when you can to Minister as it emphasises the need for a deep clean out and some transparency to keep the system honest (eg publishing exams quickly together with marking scheme – hard to see why you wouldn’t publish those once all exams marked). Worth Minister understanding need for checks and balances in system and how these did not appear to work. This is then legacy stuff – he can see it as how he keeps the bureaucrats honest (as long as he is sure it can be fixed!).

    The VCAA Chair has only been on the Board since August so that’s a good sign in enabling change. Will be interesting to see who is then appointed to CEO role.

    The real need in these cases, is less for an independent review, than for a new rigorous CEO with the Minister’s backing. Then he/she can drive change from the top. Easy to create a Project Team reporting to CEO, set up a Steering Committee and go for it. Gradually new CEO appoints own people to positions.

    The last thing the Minister wants is to highlight how bad it was. If it can be fixed without a report, all the better. Focus on creating a better way, rather than the past. Yes – there will be entrenched interests, but they can be dealt with. They need to publish something but can keep it high level.

    It’s not exactly rocket-science this stuff (by that I mean the process of exam development not the level of the questions). A few phone calls around the country and overseas would show how it should be done and then implement it – obviously it’s a bit more complicated than that but not amazingly so I would have thought. You need to develop some high level principles and then flesh them out, prepare process maps etc.

    1. Do the teachers in a school get better the year after a new, and perhaps “better” principal is appointed?

      Whilst changing the managers is probably a necessary step, I doubt it will be sufficient.

        1. Principals can decide not to renew fixed term contracts. Principals can try and ‘encourage’ teachers to leave (for example giving them difficult students, tepid support, micromanaging directly or indirectly). But it is very hard to fire mediocre permanent teachers particularly in schools with a strong union presence.

          1. Distraction. RF’s analogy. If VCAA wants better people dealing with the exams, they can do that.

            None of this is hard. VCAA has not given a stuff about the quality of the maths exams for years, probably decades. If they now care, they can improve things dramatically very easily.

              1. I agree. I read that you and Burkard offered your mathematical services but were turned down. Apart from you and Burkard, has any other mathematician formally applied to be part of the exam writing process in say the last 10 years? If not then VCAA has been forced to appoint people from whatever motley crew applied. It’s very good that mathematicians have now set aside their decades of indifference and are taking an active interest in the exam writing process. This must continue rather than being a passing fad. Then if VCAA cares they really can improve things dramatically very easily AND maintain that improved standard.

              1. It depends what you mean by “mathematician”.

                Gniel said it was “fanciful” to suggest the exams weren’t checked by mathematicians. I think he was sincere, but of course he was also deluded: it is farcical to think that someone who gave the tick to the 2022 exams is a mathematician in any functional use of the term.

                It is wrong to consider that mathematicians should “apply”, in the same manner as teachers do. Mathematicians should be offered roles on the writing and the panels, and this should be managed in some manner between VCAA and high-up mathematicians.

                But, in any case, until recently I don’t think any other (real) mathematicians have put their hand up for a very long time. I think Burkard’s and my crusade might have now woken up a few.

                1. No names are needed. If Gniel simply stated the academic qualifications of the mathematicians he says are doing the checking we might all learn something interesting.

                    1. So how is VCAA to know when a real professional mathematician who offers their services is a dud?
                      (Yes I know VCAA could ring Marty or Burkard and ask but I’m sure you see the long term problem with that).

                    2. VCAA talks to the top mathematicians in the state, and trusts the top mathematicians to select or give the tick to the right people.

                    3. OK this might sound argumentative or ignorant and you don’t have to answer if it raises your blood pressure: How does VCAA know who are the top mathematicians to consult with? I can imagine a lot of self-promotion from both universities and individuals. There’s a lot at stake. (Do you foresee any sort of turf war or egotism?)

          2. All of this assumes there are others applying to fill the role(s).

            As Marty has argued at length, requiring mathematicians to apply to vet the VCAA exams is a large part of the problem.

            If people with knowledge/wisdom/experience are not applying to write/check VCAA exams then the problem continues. If said people are applying but being turned down (and I know from experience a number of good teachers who were turned down for examiner roles and some rather crap ones who were successful) then the problem will continue.

            How much the CEO/chair/manager knows/is able to do about all this I have no idea.

            1. Mathematicians do not APPLY to write or vet exams: they are REQUESTED to do so. They are doing you guys a favour if they get involved.

              This is the fundamental point. Maths ed has gone for decades thinking that mathematicians are, at best, optional. No goddam way should mathematicians now be begging to do this stuff, and no goddam way will they be.

              1. OK. Replace “applying” with “willing and able”.

                My main thoughts are:

                1. Yes, as Marty (and others) have said repeatedly, the exams were better when universities were responsible for them.

                2. No, I don’t see the current “investigation(s)” solving anything because the scope is far too limited, and acknowledging the extent of the issues seems far beyond the ability of multiple organisations.

      1. It’s a little different in the bureaucracy RedFive because you have a management chain and these days it’s very easy to move people around and get rid of them (just do a restructure) – (Yes Minister is very much historical). Also there is a clear Ministerial mandate for action and I can’t think of a school equivalent – it tends to sweep things before it.

        What is the same in any organisation is that things tend to get better slowly with excellent management but can deteriorate very quickly with bad management.

    2. Thanks, JJ. Of course “independent review” is necessary but not sufficient: Deloitte was independent. What is required is the review be set up by the Minister, independent of VCAA.

      But, yes, what you say is the real key: nothing here is rocket science. All that is required is concern, intelligence and talking to the people who do this better.

      1. Ah yes but the Minister won’t set up the review. His advisors will set it up. Advisors don’t have a great track record that I have seen. I know it’s probably an infinite regression: who advises the Minister on who the advisors should be?

      2. Thanks Marty – depends what you mean by ‘independent’ and ‘review’. In my book it is sufficient that it is independent of the current VCAA – and Gniel no longer being there is a clear first step. In my view a new CEO from outside VCAA with a mandate for change is sufficiently ‘independent’. As for ‘review’, in my book, the goal is meaningful positive change, whether written before or after it takes place in a pretty report or not.

        If I were Minister, I would be wondering, if VCAA had stuffed up this big, what else apart from exams is it stuffing up? A new CEO might be given the task of wholesale change – not just exams. This may or may not lead to positive change.

        Sometimes things quietly moulder for many years until someone puts their head over the parapet, and then nothing is the same after that.

  5. It’s possible that the lawyers have been calling the shots. For example, I have heard it said that the delay in publishing exams is VCAA lawyers checking copyright. Why that would be an issue for maths exams, and how this process failed so badly with several English exams, I don’t know. How plausible it is I don’t know either. Nevertheless, maybe VCAA should start listening to QLD lawyers.

    1. The copyright thing is an excuse, not a reason. If NESA (NSW) can publish a maths exam in three days then so can VCAA.

      As for lawyers, I don’t think it would have been part of the original 2022 grading conversation. But once the external reviews began, it’s a good guess the lawyers became involved. Perhaps VCAA has now dug too deep.

      1. Maybe they need people to let bygones be bygones in order to start fresh. Hard to start fresh when the past keeps getting in the way. Then again maybe the piper has to be paid before bygones can be bygones.

        1. There’s an argument for that. But VCAA has to earn it. Signs of proper reform can be an implicit acknowledgment that past practices were woeful. But one has to first see the signs.

    2. Checking copyright could be done before exams are sat, even before they’re finalised and printed. We know copyright is a problem because of massive removals of content from some exams once they are uploaded online which is bad for transparency and bad for revision.

            1. Indeed. If Victoria just imported NESA’s mathematics subjects + exams, that would instantly be a massive improvement.

  6. This is a reply to a comment/question(s) from Anothermouse, above:

    “How does VCAA know who are the top mathematicians to consult with? … There’s a lot at stake. (Do you foresee any sort of turf war or egotism?)”

    There’s plenty here for at least a couple of posts, but I really want to avoid writing further posts on this for now. So, here are brief (for me) answers.

    The short answer to how to evaluate mathematical expertise, or any academic expertise, is peer review: the people in Discipline X determine the expertise of the people in Discipline X. Of course such an evaluation process is circular, and problems and disputes can obviously arise, particularly in rickety and/or contentious disciplines. Peer review is far from problem free even in a truth-is-truth discipline such as mathematics, but it works well enough.

    So, one easy answer to the question of whom VCAA should seek guidance from is: the top mathematicians at Melbourne and Monash. That’s not to suggest there are not suitably strong mathematicians at other Victorian unis/institutes, or that writers/vetters cannot be chosen from other places. There are other ways to do this. But in practice, it is an easy problem to solve.

    So why hasn’t this happened already? Why isn’t there already a bunfight to take charge of the exams? That leads to answering the second question.

    Proper solid mathematicians really really don’t want to do this stuff. They really don’t want to be writing/vetting school exams, particularly for a CAS-polluted, dogshit curriculum like VCE. They have much more intellectually stimulating and professionally rewarding things to do.

    It is also not easy to do this stuff well, even for a mathematician. I know a very good mathematician who does some vetting in another state and they absolutely hate it. They are crazy-busy, they get essentially no money or professional reward for doing the vetting, and they know that they have to concentrate insanely hard to not stuff up. They do it for the one and only reason that they have the sense of public obligation to do it.

    Trust me: there will be no bunfight amongst solid mathematicians to take on any writing or vetting roles for the VCE exams. The only concern will be to have enough of these solid mathematicians display enough sense of community to be part of this. I am confident that this can be done, and the response to the open letter is decent evidence that it can be done, but it is not a gimme. The possibility of a bunfight is the least of my concerns.

    1. The other comment worth noting is that government departments have very very little contact with universities by choice these days. So it’s going against the grain to engage – that’s one reason things got like they are. Not hard at all to get academics to engage in my experience, although they are less happy once they’ve experienced it!
