# VCAA’s Greater Literary Offenses

The difficulty of critiquing VCAA mathematics exams is capturing the variety and the frequency and the depth of the flaws, and then summing the overall effect: the fundamentally impoverished approach to mathematics and its testing. Documenting straight-out errors is not overly difficult, and even non sequitur questions are manageable: the error or weirdness typically speaks for itself. Capturing the ubiquitous awfulness of the writing, and the intrinsic meaninglessness of many of the questions, however, is harder.

Late last year, we took a shot at VCAA’s lesser literary offences, the sentence-by-sentence absurdities. We did so by considering pretty much every sentence on one VCE exam, Methods 2022 Exam 1. In this post, we shall consider, in similarly painful detail, one of VCAA’s greater literary offences.

A Methods or Specialist (CAS) Exam 2 consists of 20 multiple choice questions followed by five or six long questions, each with many parts. The multiple choice questions, along with the shorter questions comprising Exam 1, provide numerous clear and contained examples of lesser literary offences. These lesser offences also occur on the long questions, of course, but many of these longer questions are also offences as a whole.

A long mathematics question should be long for a reason. If there are many parts then these parts should be related and should be building a deeper analysis. In brief, a long mathematics question should be the framework for a story.

A mathematical story need not be great art, of course. Even less does such a story need a real-world scenario, and even less than that does it need dressing up with character or colour. As such, critiquing a mathematical story is not the same as critiquing fiction. But there is overlap.

In his hilarious Fenimore Cooper piece, Mark Twain lists eighteen rules of storytelling. The final eight “little” rules were the focus of our lesser offences post. Of the ten larger rules, most do not readily apply here. But the first two rules, which Twain frames in regard to Cooper’s novel The Deerslayer, are directly relevant:

1. That a tale shall accomplish something and arrive somewhere. But the “Deerslayer” tale accomplishes nothing and arrives in air.

2. They require that the episodes in a tale shall be necessary parts of the tale, and shall help to develop it. But as the “Deerslayer” tale is not a tale, and accomplishes nothing and arrives nowhere, the episodes have no rightful place in the work, since there was nothing for them to develop.

We shall now go through a question from 2022 Methods Exam 2, part by part. The relevance of the above two rules will be obvious.

The exam question contains ten parts and is worth in total 14 marks (= 21 minutes). The question begins:

And, already we’re bored to tears. And confused, since a “binomial random variable” can only be declared once we first declare the number of flips. But the boringness is the main point. Over forty words are used, when seven would have sufficed:

An unbiased coin is flipped five times.

We do not need Mika, and we do not need the random variable X, at least not for a good while.

OK, sure. You could have asked for the probability that the coin comes up heads every/five times, but sure.

Again, words are better than X but, again, OK.

Yes, a reasonable progression of the story, but why “three decimal places”? There are some nice fractions already sitting there, ingredients for a simpler, better, exact answer. What’s the point of the detour into decimals?
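To make the point concrete, here is a minimal sketch (ours, not VCAA’s) of the exact values already sitting there for five flips of an unbiased coin, using Python’s exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

# Binomial probabilities for five flips of an unbiased coin (p = 1/2),
# kept as exact fractions rather than three-decimal approximations.
p = Fraction(1, 2)
pmf = {k: comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(6)}

print(pmf[5])                             # five heads: 1/32
print(sum(pmf[k] for k in range(1, 6)))   # at least one head: 31/32
```

Every probability in sight is a fraction with denominator 32; nothing about the question requires a decimal, let alone one rounded to three places.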

This has nothing to do with the earlier three parts, but OK. And now, yes, you need X, so now is when you should introduce X.

Anyway, we are done with part (a). How, then, will part (b) further the story? Well,

Sweet Jesus. What a sentence. What a scenario. What a disjunction.

Just in case we didn’t realise that indicates a definite integral, which thus has a value.

Is anyone else pondering why we might be given that?

Why? Why find r and s? Who could possibly care? What does it tell us about the height of Mika’s flip, or Mika’s binomialling, or anything whatsoever? Where are you taking us? Can we opt out?

On to part (c):

It was always too much to hope for, that Mika might be an only child.

The correct answer is “Who gives a toss?”, for which no justification is required.

We’re pretty confident that three decimal places is pretty pointless.

So, we’re told the sample mean is 0.4, and then we decide how many flips this sample will contain. A miracle.

And thus the story ends. Like The Deerslayer, the story has accomplished nothing and, like The Deerslayer, it ends in air.

VCAA’s story is depressing and painful in its pointlessness. It hurts to read. We did, however, think of an apt name for the story: The Cheerslayer.

## 22 Replies to “VCAA’s Greater Literary Offenses”

1. Anonymous says:

The reason that it’s to three decimal places is that the examiners fully expect and encourage students to use the binomPDF, binomCDF, and similar functions on their CASes to solve those parts, which from memory return results as decimals.

1. marty says:

Thanks, A. There’s not much reason in that reason.

1. SRK says:

Yes this attitude is mystifying – is it really that bad to have to type in the binomial coefficients and powers of fractions? I guess it might be for something like n = 15, p = 13/17, x ≤ 7. But then don’t write questions like that?

1. marty says:

Is it really that bad to avoid typing altogether, to simplify (13/16 – 1/32)/(31/32) in your head?

God, I hate this. Thanks, David. Thanks, Kaye.
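For what it’s worth, the conditional probability marty has in mind simplifies exactly, and a two-line sketch (our illustration, assuming his fractions) confirms it:

```python
from fractions import Fraction

# The calculation marty suggests doing in your head:
# (13/16 - 1/32) / (31/32)
value = (Fraction(13, 16) - Fraction(1, 32)) / Fraction(31, 32)
print(value)  # 25/31
```

Thirty-seconds all round: 26/32 minus 1/32 is 25/32, and dividing by 31/32 leaves 25/31. No typing required.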

2. Red Five says:

And if the examiners REALLY wanted to test CAS skills they would test the Menu->Number->Approximate to Fraction button pushing skills of students to give the answer as a fraction…

“Who gives a toss” – Marty, this is one of your better lines, and that IS saying something.

1. marty says:

Yeah, sometimes I’m proud of myself.

2. Anonymous says:

The problem with that is obviously that approximating to a fraction is not always accurate and it might differ from calculator to calculator.

But VCAA probably doesn’t care about that; the real (in their eyes) problem is probably that not all students will have the calculator set to give the maximum number of decimal places, and something like approximate to fraction (probably, I haven’t touched my CAS in a while and I’d rather not recharge the thing) isn’t too useful when you’ve got your CAS set to only give 4 or 5 decimal places.

3. Tungsten says:

Since memory does not need to be cleared, I would point out that it is far from impossible to define a function in the CAS that gives the exact values needed. Why the inbuilt functions cannot do exact values I do not know.

1. marty says:

Why the inbuilt brains cannot do exact values, I do not know.

1. Anonymous says:

Marty, you need to learn to love the bomb, I mean microchip.

1. marty says:

Stop with the YouTube thing. Occasionally, I don’t mind. Too frequently is boring and annoying.

2. Anonymous says:

My guess is because these functions probably do have some use in an industrial setting, or just for general use, where the results being inexact doesn’t matter too much (and might be preferable), and it’s rather silly to have exact results for very large values of n. I have from time to time calculated a confidence interval here or there (outside of VCE), which is a legitimate use of the CAS, though I think I just used some website instead.

I don’t think VCAA wants to acknowledge the fact that it’s possible to program the CAS, because if they acknowledged that, logically, they would have to acknowledge that 90% of the CAS questions are even more meaningless than they already are. And also, they would have to acknowledge how biased VCE maths is toward students who go to rich private schools, or who have tutors, who can give them said programs, which many students simply don’t have access to. While it isn’t very difficult for a decently knowledgeable programmer to create a CAS program, I think it’s still somewhat out of reach for most students to do directly. Downloading the program that their tutor sent them, or downloading it from an LMS, is not, however.

2. Glen says:

Possibly this comment should be on the other post?

1. marty says:

Damn. Thanks, Glen. I’ll delete and put it where it belongs.

3. Anonymous says:

It is hard for me to get a sense of how frequent the errors are, or of how they compare to other tests. (Which is why I asked about the comparison to AP testing; thank you for the response.)

Also, while the gotcha mistakes are fun (and distressing), I wonder if the bigger issues are (a) coverage, to include calculator pushing and (b) question wording.

On (b), I have noticed the same thing from College Board of having questions that seem to reward students with more verbal skill, versus just conventional math problem solving chops. That or you just need to train a bit, to get used to how the test makers write (and think).

I guess, I would sort of just prefer a test that was a random selection of conventional problems from Thomas Finney or even Granville. It’s almost like the test makers want to show off, with how they write the tests. I agree they are…eh…interesting looking. But…just feels convoluted versus a straight examination. If the kids can rock a straight examination, then so what? Take it as a victory.

Don’t feel some need to sort them more (in a way that turns into a bit of an IQ test, versus just a subject test). The advantages for high IQ kids are many, in our school dominated culture. They’ll make out all right. And if they really want to get sorted more for best of the best, they can go into math competitions. That’ll sort them fine. Or maybe they can work on their other subjects and just take math as a win and done.

1. marty says:

“gotcha”. Jesus.

2. aps says:

Agreed that plainly written classic problems would be better. Probably VCAA thinks that if the content was too learnable, and the exams clearly and straightforwardly tested it, too many students would do well and they wouldn’t have a nice normal distribution, centred around a score of 50-60% on the exams, to translate into study scores.

I don’t think there’s anything wrong with testing raw intelligence a bit if that comes in the form of mathematical problem solving. I suppose this is partly because problem solving is a practiceable/learnable skill as well, and that senior maths should be pushing students to work on it. But even if we suppose it’s just natural skill: students who are better at maths can score higher in maths. That sounds fair to me.

But then of course, the main point: having sat this exam myself, I really don’t think this manner of narrative demands IQ/intelligence. In contrast, it demands you leave any and all critical thinking behind. Applying intelligence to this question surely means wondering ‘why is it impossible for the coin to be thrown less than 1.5 metres in the air’ and ‘why might this distribution be a quadratic’ and ‘WHY does this need to be rounded to 3 decimal places’. But of course there’s no time for any of this, and it’s never relevant to the question at hand.

To do well in this exam you need to train yourself into a kind of selective blindness, where you completely ignore any and all details about the supposed ‘application’ of a question. VCAA thinks this is an actual skill! They really do! It’s called ‘synthesising’ or ‘processing’ or ‘extracting the maths’ or something. And sure, this can require a non-trivial degree of reading comprehension. Enough to really stump some EAL kids, and a good number of others. But I wouldn’t say it’s sorting people based on IQ.

Still, we can agree that the wording + narrative is unnecessarily bad.

1. Anonymous says:

You will always get a spread of results no matter how many plainly written classic questions are given, and I’m sure VCAA knows this. Just like you can probably always pull over any car on the road and find it unroadworthy. It is teachers who write the exams, and you only have to look at questions written by most teachers in most schools to see what the problem is. But which comes first, the chicken or the egg?

1. marty says:

You have no idea who writes these exams.

2. Student says:

The last point about synthesising is imo the biggest issue with VCE mathematics. As someone who was a high achieving student last year (thankfully managed to only lose 3 or 4 marks on last year’s exam one despite the flaws, and none on exam two), and was surrounded by those who achieved highly, I really believe that if the questions weren’t cloaked in such perverse unintelligible dribble you wouldn’t get shock exam reports each year revealing easy questions answered correctly by only 30% of the state. It would become quite apparent that the exams, if boiled down to the actual essence of each question, are unacceptably routine, computational, repetitive sets of the same questions, devoid of any need for problem-solving or any deeper conceptual understanding. Students are taught to think twice about anything they read, questioning whether the rare clearly-communicated question contains some trick.

Some schools (and a small number of the commercially available trial papers) have meaningful and rigorous investigations that follow a cohesive path of increasing generality and difficulty. From the perspective of a student, it certainly seems VCAA exam writers seek to subvert those taking their exam in an attempt to create apparent “difficulty”. If this distorted “VCAA vs students” approach to creating assessments were dropped (along with the nitpicking, senseless marking schemes and cheap tricks through which it is manifest), a much more rewarding and respectable paper could be written.

1. marty says:

Great comment, S, and congratulations on surviving VCE mathematics, and more.

I would disagree on one point. You suggest that the exam writers are aware of their subversion, but I don’t believe that. The exam questions and their grading are madness, of course. But my sense is that the writers are of the genuine opinion that they are offering meaningful, testing questions. I can’t prove it, but I think this is the case.

4. Anonymous says:

Part (b)(iii) is the most gratuitous question I’ve seen in many years. I can force myself to swallow the other parts, including the height of the coin toss, but (b)(iii) takes the biscuit.

The report is interesting too. For example, for Q3(b)(ii) the value of c is given as -2.783 (recurring on the 3) = -167/60. Why give the recurring decimal? Q3(b)(i) was probably asked to help students with (ii), but clearly it’s asking for trouble (it should have been anticipated that some students would give a value in terms of a, b and c. Not technically wrong?). It should have been deleted and the mark given to (ii). This seems the same as the 20×20 area question that was probably asked to help students, but all it did was let some students lose a mark they might have got. Less help might be more helpful?

You ask who gives a toss about (c)(i). Maybe we should all give a toss, because I think it’s deeply troubling that 1/3 of students didn’t get the correct answer (or maybe they did but VCAA didn’t like their justification?). But VCAA thinks that 1/3 of students getting zero means the question was answered well!

I think the story should be called the Beerslayer because it makes me want to drink huge amounts of beer to forget about it.
