Deloitte, QCAA and NESA

This is the first part of a double-post. It was initially one long(er) post, but it seemed preferable to split off the background, on aspects of the Deloitte review, into its own post, which is below. The substance proper is in the next post. I’ve also written on the Deloitte story here, but there are new details in the following.

The Deloitte review was triggered by Burkard’s and my complaint to VCAA in November 2022, over what we claimed were errors in, and the general poverty of, the 2022 VCE mathematics exams. In response to our complaint, VCAA requested that Burkard and I “provide all [our] concerns in writing, identifying the serious errors, and [our] concerns regarding the structure, development and vetting processes”, stating that VCAA intended to pass on our written concerns to “an external provider for review and analysis”. In December 2022, Burkard and I composed our critique and forwarded it to VCAA. We framed our critique with a list of our concerns about VCAA’s proposed review (pp 1-2), concerns that subsequently – more accurately, anteriorly – turned out to be fully justified. As we learned much later, VCAA had already approached Deloitte to be the “external provider”.

In March 2023, VCAA informed me and Burkard that Deloitte had been appointed to conduct the review, and that “[t]he objective of the review is to ascertain whether VCAA acted in accordance with its policies and procedures in relation to the writing of the exams.” Burkard and I quickly replied, noting the absurd irrelevance of this objective. VCAA responded in May, indicating that “the VCAA and Deloitte may consider the use of another maths expert to assess the accuracy and quality of [exam] question content”. Here, “accuracy” can be taken as an inaccurate term for “validity”, but more confusing is the phrase “another maths expert”: given that we had been provided no evidence of a previous “maths expert”, whatever that might be, it was unclear how there could be “another”. This may have simply been clumsy wording but it is possible the phrase was intentionally cunning or unintentionally revealing. As Burkard and I later learned, VCAA/Deloitte had already engaged their “maths expert[s]”.

From the get-go, Deloitte’s review was intended to be off the point, focussing on “policies and procedures” and “the exam development process”, rather than on the exams themselves. Nonetheless, VCAA/Deloitte had no choice but to at least pay lip service to Burkard’s and my substantive claims regarding the 2022 exams. Given that Deloitte lacks the expertise to perform any such analysis, and accepting as a given the absurdity of Deloitte being involved at all, an engagement of third party “maths experts” could have made some sense. In practice, it was more nonsense, although, as it turns out, important nonsense.

In April 2023, VCAA engaged the state education authorities QCAA and NESA to analyse the exam questions, although it is not remotely clear what “QCAA” and “NESA” mean in this context. The analyses were performed by three people, two associated with QCAA and one with NESA, but we have zero knowledge of the positions or qualifications or expertise of these people. The people are at different points described as “subject matter experts” and “mathematics/maths specialists” but of course, in the hands of VCAA, these phrases are entirely meaningless. Whoever these people were, and whatever their intrinsic merits, the choice of “QCAA” and “NESA” is hopelessly clouded while being undoubtedly perverting.

Along with the lack of information on the “specialists”, Deloitte indicates nothing of how the reviewing task was framed for QCAA and NESA, beyond that QCAA was asked to review all 2022 exam questions while NESA was asked to review only those questions flagged in Burkard’s and my critique (along with some Further Mathematics questions included for unknown reasons, p 22). Such framing is of critical importance, with different framings likely to result in dramatically different outcomes; note, for example, the very thoughtful and detailed framing outlined in the Bennett Report (pp 41-44).

The perversion in choosing QCAA and NESA is obvious but must be emphasised. In assessing the validity of mathematics exam questions one requires, first and foremost, a strong academic mathematician, a person entirely comfortable with both the examined mathematics and the underlying mathematical foundations. Of course, it is always possible for an education authority to have, or to have easy access to, such a mathematician, but for VCAA/Deloitte to hope for this is a thoroughly ridiculous, needlessly long-odds gamble. If not, and much more likely, a deliberate farce. Beyond the blatant absurdity, there is a further perverting aspect: colleagues will always be loath to criticise colleagues. State education bodies have strong professional connections, which would have provided NESA and QCAA a powerful incentive to pull their punches when assessing the work of VCAA. There is simply no reason to presume that this did not occur.

Nonetheless, QCAA and NESA were what VCAA/Deloitte wanted and so it is what they got. It was a cunningly bad choice but it turned out to not be sufficiently bad.

4 Replies to “Deloitte, QCAA and NESA”

  1. I finally got around to reading the Deloitte review.
    You rightfully slam VCAA for setting the terms of reference of the review to not address the mathematical issues. However, the process and transparency aspects are important and the review did land some punches – I like the table comparing NESA and QCAA transparency to VCAA. There are a lot of “Act Now” recommendations that are meant to take effect this year that address the inadequacies in the exam and marking guide development – and in their timely release after the exam.

    It is strange how the QCAA reviewer found no issues with the paper except some minor points on mark distribution and clarity of writing, while the NESA reviewer was told to focus on just the questions you highlighted and so was clearly looking for problems (and reads your blog!)

    Anyway, the thing that kind of annoyed me was how they kept referring to you and Burkard as “Mr” instead of “Dr” and “Prof” respectively… they didn’t want their own maths experts and refused to acknowledge those that had reached out.

    1. Thanks, Simon. Yeah, Deloitte had some reasonable process and transparency stuff. But it’s mighty small beer compared to the critically necessary recommendation: “Don’t have your maths exams written or vetted by know-nothing twats”.

      I don’t think it is at all strange that QCAA screwed up, for a number of reasons.

      First of all, and as I wrote, the framing of the review request makes a massive difference. Throwing six exams at two people and saying “Have a look at these, see what you think” is massively different from giving them a much smaller selection of questions, presumably selected for some reason, even if the basis of the selection has not been declared. It is really worth reading what Bennett did: I have some churlish quibbles, which I plan to write about, but what he did was really impressive.

      Secondly, as I wrote, QCAA is going to be reluctant to criticise their VCAA colleagues, particularly if the task has been presented as a “Have a look at this” exercise, which it almost certainly was.

      Thirdly, as I didn’t write, QCAA is incompetent. Look at the QLD exams and your heart will sink, just as it does in Victoria. And again, the guys who reviewed for Deloitte are not even clearly the guys writing the appalling exams: they could well be the “maths specialists” within QCAA itself. Think of the “maths specialists” within VCAA and you’ll get the idea. Yes, QCAA has demonstrated a concern for integrity entirely foreign to VCAA, and they have demonstrated *some* concern for proper vetting, but they are still incompetent.

      I also noted the “Mr. Ross” and so forth. I can’t stand an academic who insists on the use of their title, which usually indicates nothing except that they have a small penis. But in a report like Deloitte it’s pretty sloppy and disrespectful.

      1. Yeah, it is pretty naff to insist on people using your title – but in a formal report like this it is disrespectful to get it wrong, especially as it was supplied in your submission included in the appendix.

        I don’t know much about QCAA exams – but after 50 years of no exams it’s not too surprising if they have some wobbles.
        However, I did have a flick through their curriculum and I think I prefer their content choices in GM and SM (MM is similar) – and I do like some of the details in their curriculum docs and appreciate that their curricula and exams are all released as Creative Commons [BY] – free to use even for commercial use (provided you give attribution), with no applying for permission or paying fees like VCAA and NESA. Also, internal assessment for year 12 is one investigation and two exams – much better than endless SACs.

        Re a possible reluctance “to criticise their VCAA colleagues” – maybe. But if they demonstrate a lack of mathematical understanding in their review of the questions then it also reflects poorly on them. As you say, I guess it does depend on what they were asked. Note that the QCAA reviewer used language that reflects their required balancing of exam questions (see page 40 of the SM Syllabus) so maybe they were just focussed on the overall structure of the papers.

        Just read the Bennett review – it is more detailed and has more strength in its recommendations but is pretty similar in theme. I like how they added control questions into the ones submitted for expert review – a nice touch.

        1. Anybody who fails to flag the complex MCQ question as absolutely stuffed should not be let anywhere near senior mathematics (or sharp objects).
