This combo WitCH comes courtesy of mystery correspondent, tjrb. They flagged three multiple choice questions from the 2018 Algorithmics exam (here, and examination report here), and we’ve added a fourth. tjrb also remarks, “There are probably a lot more errors in this paper (and the other algorithmics papers), but these were the most strikingly incorrect”.
For Q2, the examination report indicates that 41% of students gave the intended answer of A. By way of explanation, the report then remarks,
“Cobham theorised that problems that are feasibly computable (also known as easy problems) are those that are decidable in polynomial time.”
For Q6, the report indicates that both A (51%) and C (33%) were “accepted”, but is otherwise silent.
The report is silent on Q12 and Q16, except to indicate the intended answers: C (94%) and A (66%), respectively.
The question below is from the first Methods exam (not online), held a few days ago, upon which we’ll write more generally very soon. The question was brought to our attention by frequent commenter Red Five, and we’ve been pondering it for a couple of days; we’re not sure whether it’s sufficient for a WitCH, or is a PoSWW, or is just a little silly. But, whatever it is, it’s pretty annoying, so what the hell.
OK, we’ll get back into this slowly, and let others do the work. (Yes, at some point soon we’ll write about the seventy million knuckle-draggers who voted for Trump.)
Fewer Australians are taking advanced maths in Year 12. We can learn from countries doing it better.
To be fair, and making our way past the pithy title, we’re not sure the article is crap: we’re just not sure what it is. See how you go.
This WitCH is from Cambridge’s 2020 textbook, Mathematical Methods, Units 1 & 2. It is the closing summary of Chapter 21A, Estimating the area under a graph. (It is followed by 21B, Finding the exact area: the definite integral.)
We’re somewhat reluctant about this one, since it’s not as bad as some other WitCHes. Indeed, it is a conscious attempt to do good; it just doesn’t succeed. It came up in a tutorial, and it was sufficiently irritating there that we felt we had no choice.
QB6 (added 21/09/20) The solution requires that a Markov process is involved, although this is not stated, either in the question or in the report.
MCQ4 (added 23/09/20) The question provides a histogram for a continuous distribution (bird beak sizes), and asks for the “closest” of five listed values to the interquartile range. As the examination report almost acknowledges (presumably in time for the grading), this cannot be determined from the histogram; three of the listed values may be closest, depending upon the precise distribution. The report suggests one of these values as the “best” estimate, but does not rely upon this suggestion. See the comments below.
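The underlying point is easy to demonstrate with made-up numbers (not the exam’s bird data): two datasets can produce the identical histogram yet have very different interquartile ranges, so the histogram alone cannot determine the IQR.

```python
import numpy as np

# Two made-up datasets with the same histogram (two bins, [0, 1) and [1, 2),
# four values in each bin), but very different spreads within the bins.
spread_out = np.array([0.1, 0.1, 0.1, 0.1, 1.9, 1.9, 1.9, 1.9])
bunched_up = np.array([0.9, 0.9, 0.9, 0.9, 1.1, 1.1, 1.1, 1.1])

bins = [0, 1, 2]
print(np.histogram(spread_out, bins=bins)[0])  # [4 4]
print(np.histogram(bunched_up, bins=bins)[0])  # [4 4]

# Identical histograms, very different interquartile ranges.
def iqr(x):
    return np.percentile(x, 75) - np.percentile(x, 25)

print(round(iqr(spread_out), 2))  # 1.8
print(round(iqr(bunched_up), 2))  # 0.2
```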
Q1(c)(ii) (added 13/11/20) – discussed here. The question is fundamentally nonsense, since there are infinitely many 1 x 3 matrices L that will solve the equation. As well, the 3 x 1 matrix given in the question does not represent the total value of the three products as indicated in Q(c)(i). The examination report does not acknowledge either error, but does add irony to the error by whining about students incorrectly answering with a 3 x 1 matrix.
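A quick check of the underlying point, using made-up numbers rather than the exam’s actual matrices: if M is a fixed 3 x 1 matrix, the equation LM = c has infinitely many 1 x 3 solutions L, since any entry of L can be traded off against the others.

```python
import numpy as np

# Hypothetical 3x1 matrix (not the exam's actual numbers).
M = np.array([[2.0], [3.0], [5.0]])

# Two different 1x3 matrices L giving the same 1x1 product.
L1 = np.array([[2.0, 4.0, 3.0]])   # 2*2 + 4*3 + 3*5 = 31
L2 = np.array([[0.5, 10.0, 0.0]])  # 0.5*2 + 10*3 + 0*5 = 31

print((L1 @ M).item())  # 31.0
print((L2 @ M).item())  # 31.0
```

Scaling either solution’s trade-off continuously produces a solution L for every real parameter, hence infinitely many.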
MCQ11 (added 13/11/20) – discussed here. None of the available answers is correct, since seasonal indices can be negative. The examination report does not acknowledge the error.
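The point is easy to verify with made-up figures. If a seasonal index is computed in the standard way, as a season’s figure divided by the overall average, then negative data (quarterly profits, say) can produce a negative index:

```python
# Hypothetical quarterly profits for one year (not the exam's data).
profits = [-10.0, 30.0, 40.0, 60.0]  # Q1 runs at a loss

overall_average = sum(profits) / len(profits)  # 30.0
seasonal_indices = [q / overall_average for q in profits]

print(seasonal_indices[0])  # negative (about -0.33)
```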
MCQ9 Module 2 (added 30/09/20) The question refers to cutting a wedge of cheese to make a “similar” wedge of cheese, but the new wedge is not (mathematically) similar. The exam report states that the word “similar” was intended “in its everyday sense” and notes the confusion, albeit in a weaselly, “who woulda thought?” manner. A second answer was marked correct, although only after a fight over the issue.
Q10(c) (added 13/11/20) – discussed here. The intended solution requires computing a doubly improper integral, which is beyond the scope of the subject. The examination report ducks the issue, by providing only an answer, with no accompanying solution.
Q3(b) (added 13/11/20) – discussed here. The wording of the question is fundamentally flawed, since the “maximum possible proportion” of the function does not exist here, and in any case need not be equal to the “limiting value” of the function. The examination “report” contains nothing but the intended answer.
MCQ20 (added 24/09/20) The notation in the question, and seemingly also in the accompanying diagram, refers to the forces themselves, but the same notation refers to the magnitudes of these forces in the suggested answers. The examination report doesn’t acknowledge the error.
We’re not really ready to embark upon this post, but it seems best to get it underway ASAP, and have commenters begin making suggestions.
It seems worthwhile to have all the Mathematical Methods exam errors collected in one place: this is to be the place.*
Our plan is to update this post as commenters point out the exam errors, and so slowly (or quickly) we will compile a comprehensive list.
To be as clear as possible, by “error”, we mean a definite mistake, something more directly wrong than pointlessness or poor wording or stupid modelling. The mistake can be intrinsic to the question, or in the solution as indicated in the examination report; examples of the latter could include an insufficient or incomplete solution, or a solution that goes beyond the curriculum. Minor errors are still errors and will be listed.
With each error, we shall also indicate whether the error is (in our opinion) major or minor, and we’ll indicate whether the examination report acknowledges the error, updating as appropriate. Of course there will be judgment calls, and we’re the boss. But, we’ll happily argue the tosses in the comments.
Get to work!
Q9(c), Section B (added 13/11/20) – discussed here. The question contains a fundamentally misleading diagram, and the solution involves the derivative of a function at the endpoint of a closed interval, which is beyond the scope of the course. The examination report is silent on both issues.
Q3(h), Section B (added 06/10/20) – discussed here. This is the error that convinced us to start this blog. The question concerns a “probability density function”, but with integral unequal to 1. As a consequence, the requested “mean” (part (i)) and “median” (part (ii)) make no definite sense.
There are three natural approaches to defining the “median” for part (ii), leading to three different answers to the requested two decimal places. Initially, the examination report acknowledged the issue, while weaselly avoiding direct admission of the fundamental screw-up; answers to the nearest integer were accepted. A subsequent amendment, made over two years later, made the report slightly more honest, although the term “screw-up” still does not appear.
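To see how such readings of “median” come apart, here is a toy “probability density function” with total area 3/2 rather than 1 (a stand-in, not the exam’s actual function): f(x) = x on [0, √3]. Taking the “median” m to mean the area to the left of m equals 1/2, the area to the right equals 1/2, or the area to the left equals half the total, gives three different values.

```python
from math import sqrt

# Toy "pdf" f(x) = x on [0, sqrt(3)]; its total area is 1.5, not 1.
total = 1.5

# The area from 0 to m under f(x) = x is m**2 / 2, so each reading
# of "median" solves directly:
m_left_half  = sqrt(2 * 0.5)             # area to the left of m is 1/2
m_right_half = sqrt(2 * (total - 0.5))   # area to the right of m is 1/2
m_half_total = sqrt(2 * (total / 2))     # area to the left is half the total

print(round(m_left_half, 2))   # 1.0
print(round(m_right_half, 2))  # 1.41
print(round(m_half_total, 2))  # 1.22
```

Three defensible definitions, three different two-decimal answers, exactly the kind of ambiguity the question created.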
As noted in the comment and update to this post, the “mean” in part (i) is most naturally defined in a manner different to that suggested in the examination report, leading to a different answer. The examination report still fails to acknowledge any issue with part (i).
Q4(c), Section B (added 25/09/20) The solution in the examination report sets up (but doesn’t use) the equation dy/dx = stuff = 0, instead of the correct d/dx(stuff) = 0.
Q5(b)(i) (added 24/09/20) The solution in the examination report gives an incorrect expression in the working, rather than the correct one.
Q5(c) (added 13/11/20) – discussed here. The method suggested in the examination report is fundamentally invalid.
MCQ4 (added 21/09/20) – discussed here. The described function need not satisfy any of the suggested conditions. The underlying issue is the notion of “inflection point”, which was (and is) undefined in the syllabus material. The examination report ignores the issue.
Q4, Section 2 (added 23/09/20) The vertex of the parabola is incorrectly labelled (-1,0), instead of (0,-1). The error is not acknowledged in the examination report.
Q7(b) (added 23/09/20) The question asks students to “find p“, where p is the probability that a biased coin comes up heads, and where it turns out that p = 0 is one of the solutions. The question is fatally ambiguous, since there is no definitive answer to whether p = 0 is possible for a “biased coin”.
The examination report answer includes both values of p, while also noting “The cancelling out of p was rarely supported; many students incorrectly [sic] assumed that p could not be 0.” The implication, but not the certainty, is that although 0 was intended as a correct answer, students who left out or excluded 0 could receive full marks IF they explicitly “supported” this exclusion.
This is an archetypal example of the examiners stuffing up, refusing to acknowledge their stuff-up, and refusing to attempt any proper repair of their stuff-up. Entirely unprofessional and utterly disgraceful.
MCQ12 (added 26/09/20) Same as in the 2014 Exam 2, above: the described function need not satisfy any of the suggested conditions, as discussed here.