This is the home for Further Mathematics exam errors. The guidelines are given on the Methods error post, and there is also a Specialist error post.

***********************

**2021, Exam 2 (no exam or report yet available – discussed here)**

**QA(1)(f) (added 24/11/21 – discussed here)** The question asks for a minimum value to be an outlier, which, by definition of an outlier, cannot exist.

**QB(1)(2)(c) (added 24/11/21 – discussed here)** The indicated matrix *M*^{2} is not the square of the matrix *M* provided earlier in the question.

**2019, Exam 1 (Here, and report here)**

**QB(6) (added 21/09/20)** The solution requires that a Markov process is involved, although this is not stated, either in the question or in the report.

**2018 NHT, Exam 1 (Here, and report here)**

**MCQ4 (added 23/09/20)** The question provides a histogram for a continuous distribution (bird beak sizes), and asks for the “closest” of five listed values to the interquartile range. As the examination report almost acknowledges (presumably in time for the grading), this cannot be determined from the histogram; three of the listed values may be closest, depending upon the precise distribution. The report suggests one of these values as the “best” estimate, but does not rely upon this suggestion. See the comments below.
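The ambiguity here is easy to make concrete: from a histogram you only know which bin each quartile falls in, not where within the bin, so the IQR is only pinned down to an interval. The bin edges and counts below are entirely hypothetical (the exam's actual data is not reproduced here), and the sketch uses one simple quartile convention of several in use; it just shows how to bound the IQR from binned data.

```python
# Sketch: bounds on the IQR consistent with a histogram.
# Hypothetical bin data, NOT the exam's; quartile positions use a
# simple k-th ordered value convention (other conventions exist).
bin_edges = [3.0, 3.5, 4.0, 4.5, 5.0, 5.5]   # bin width 0.5
counts    = [4, 10, 16, 7, 3]                 # n = 40 observations

n = sum(counts)

def bin_containing(k):
    """Return (left, right) edges of the bin holding the k-th ordered value."""
    cum = 0
    for i, c in enumerate(counts):
        cum += c
        if k <= cum:
            return bin_edges[i], bin_edges[i + 1]

q1_lo, q1_hi = bin_containing(n // 4)        # Q1 lies somewhere in this bin
q3_lo, q3_hi = bin_containing(3 * n // 4)    # Q3 lies somewhere in this bin

iqr_min = max(0.0, q3_lo - q1_hi)   # quartiles as close together as possible
iqr_max = q3_hi - q1_lo             # quartiles as far apart as possible
print(iqr_min, iqr_max)             # → 0.0 1.0
```

With these made-up counts any IQR between 0 and 1 is consistent with the histogram, which is exactly the kind of spread that makes a single "closest" answer indeterminate.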

**2017 Exam 2 (Here, and report here)**

**Q1(c)(ii) (added 13/11/20) – discussed here.** The question is fundamentally nonsense, since there are infinitely many 1 x 3 matrices L that will solve the equation. As well, the 3 x 1 matrix given in the question does not represent the total value of the three products as indicated in Q(c)(i). The examination report does not acknowledge either error, but does add irony to the error by whining about students incorrectly answering with a 3 x 1 matrix.

**2017 Exam 1 (Here, and report here)**

**MCQ11 (added 13/11/20) – discussed here.** None of the available answers is correct, since seasonal indices can be negative. The examination report does not acknowledge the error.

**MCQ6 Module 2 (added 05/09/22) – discussed here.** The intention of the question is reasonably clear, but the expression “how many different ways” is, at minimum, clumsily ambiguous, and one can argue for C or D or E being correct. The intended answer was E, but many students also answered C or D. The examination report suggests that the incorrect answers were due to “simple counting errors”, which is possible but far from definite. The report also writes “Most students answered option B, C or D”, which is contradicted by the statistics; presumably the statistics are correct and the sentence is wrong, but it is unclear.

**2015 Exam 1 (Here, and report here)**

**MCQ9 Module 2 (added 30/09/20)** The question refers to cutting a wedge of cheese to make a “similar” wedge of cheese, but the new wedge is not (mathematically) similar. The exam report states that the word “similar” was intended “in its everyday sense” but noted the confusion, albeit in a weaselly, “who woulda thought?” manner. A second answer was marked correct, although only after a fight over the issue.

Thanks, Terry. This issue is common, isn’t it? Feel free to list all such occurrences that you can find.

2018 NHT Further Maths Exam 1

Module – Data Analysis Q4.

The assessors’ report states that options A, B and C were all accepted because:

“The best estimate of the IQR from this histogram is 1.5, therefore option B.

Under particular conditions it would be possible for the IQR to be as small as 1.0 or as large as 2.0, therefore options A and C were also accepted.”

What are the “particular conditions”?

That is an interesting phrase to be sighted in the official exam report.

Huh. That’s pretty damn weird. Thanks, P.N., I’ll check it out.

In fact, one of the authors of this NHT paper revealed his identity somewhere.

(I am not sure whether it is a good idea to do so – shouldn’t any (previous or future) member of an exam-setting panel stay confidential?)

Interesting. I think it’s arguable that examiners should be permitted to remain anonymous, and perhaps wise. I’m not sure I see why it should be mandatory.

2015 Paper 1 Geometry module Q9.

The use of the word “similar” in the question is wrong, misleading and insanely inappropriate.

Yes, the infamous cheese question. Will add soon.

This one has a history, some of which I can repeat, and some I cannot. Here is the history I can repeat.

Two weeks after the exam was held, Burkard and I and the MAV were contacted by an upset teacher, all of whose students had misinterpreted the question. The teacher had been fobbed off by the Examinations Unit, which took two weeks to come back with an idiotic, “nothing wrong” reply. The teacher was then hoping the MAV and/or the Caped Crusaders (whatever) might go in to fight.

The MAV did … wait for it … nothing.

I did not do nothing. I emailed the Examinations Unit, and also got a bullshit reply, although they indicated the Examinations Unit was, two weeks after the exam, “continuing to monitor student responses to this question”. (No such indication had been given to the teacher.) I slammed the reply for the crap that it was, and requested the contact information for the Boss. After a couple of emails back and forth, the Boss indicated the resolution, as indicated in the post above.

Why did the grading get changed? It’s unclear, and I don’t want to take undue credit. But, it is clear that two things did *not* get the grading changed: (1) The Examination Unit’s knee-jerk willingness to fob off teachers; (2) The MAV’s knee-jerk willingness to do nothing.

Yes, the MAV … Once again, providing “a voice, leadership and professional support for mathematics education” (its words, not mine). Bravo.