This one is old, which is not in keeping with the spirit of our PoSWWs and WitCHes. And, we’ve already written on it and talked about it. But, as the GOAT PoSWW, it really deserves its own post. It is an exercise from the textbook Heinemann Maths Zone 9 (2011), which does not appear to still exist. (And yes, the accompanying photo appeared alongside the question in the textbook.)
The questions below come from something called Essential Assessment and, to be upfront, the questions are somewhat misleading. To give EA some micro-credit, not all their questions were this bad, even if plenty more that we’d seen could have been posted. So, EA is not quite as bad as these questions suggest.
On the other hand, EA, like pretty much all teaching-replacement software, appears to be utterly aimless and, thus, utterly pointless and, thus, much worse than pointless.
As David Post wrote about here and then here, Paxton’s original motion claimed powerful statistical evidence, giving “substantial reason to doubt the voting results in the Defendant States” (paragraphs 9 – 12). In particular, Paxton claimed that Trump’s early lead in the voting was statistically insurmountable (par 10):
“The probability of former Vice President Biden winning the popular vote in the four Defendant States—Georgia, Michigan, Pennsylvania, and Wisconsin—independently given President Trump’s early lead in those States as of 3 a.m. on November 4, 2020, is less than one in a quadrillion, or 1 in 1,000,000,000,000,000.”
Similarly, Paxton looked to Trump’s defeat of Clinton in 2016 to argue the unlikelihood of Biden’s win in these states (par 11):
“The same less than one in a quadrillion statistical improbability … exists when Mr. Biden’s performance in each of those Defendant States is compared to former Secretary of State Hilary Clinton’s performance in the 2016 general election and President Trump’s performance in the 2016 and 2020 general elections.”
On the face of it, these claims are, well, insane. So, what evidence did Paxton produce? It appeared in Paxton’s subsequent motion for expedited consideration, in the form of a Declaration to the Court by “Charles J. Cicchetti, PhD” (pages 20-29). Cicchetti’s Declaration has to be read to be believed.
Cicchetti’s PhD is in economics, and he is a managing director of a corporate consulting group called Berkeley Research Group. BRG appears to have no role in Paxton’s suit, and Cicchetti doesn’t say how he got involved; he simply writes that he was “asked to analyze some of the validity and credibility of the 2020 presidential election in key battleground states”. Presumably, Paxton was just after the best.
It is excruciating to read Cicchetti’s entire Declaration, but there is also no need. Amongst all the Z-scores and whatnot, Cicchetti’s argument is trivial. Here is the essence of Cicchetti’s support for Paxton’s statements above.
In regard to Trump’s early lead, Cicchetti discusses Georgia, comparing the early vote and late vote distributions (par 15):
“I use a Z-score to test if the votes from the two samples are statistically similar … There is a one in many more than quadrillions of chances that these two tabulation periods are randomly drawn from the same population.”
Similarly, in regard to Biden outperforming Clinton in the four states, Cicchetti writes
“I tested the hypothesis that the performance of the two Democrat candidates were statistically similar by comparing Clinton to Biden … [Cicchetti sprinkles some Z-score fairy dust] … I can reject the hypothesis many times more than one in a quadrillion times that the two outcomes were similar.”
And, as David Post has noted, that’s all there is. Cicchetti has demonstrated that the late Georgia votes skewed strongly to Biden, and that Biden outperformed Clinton. Both of which everybody knew was gonna happen and everybody knows did happen.
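To see how easily Cicchetti-sized Z-scores arise, here is a minimal sketch with made-up counts (loosely shaped like an early/late split, and emphatically not the actual Georgia tallies): a standard two-proportion Z-test duly spits out an astronomical score. The test assumes both batches are random draws from the same population, which is precisely the assumption that counting mail ballots and Democratic strongholds last violates by design.

```python
import math

# Made-up, illustrative counts only; NOT the actual Georgia tallies.
early_biden, early_total = 1_950_000, 4_000_000   # Biden at 48.75% early
late_biden,  late_total  =   600_000, 1_000_000   # Biden at 60% late

p1 = early_biden / early_total                    # early Biden share
p2 = late_biden / late_total                      # late Biden share
pooled = (early_biden + late_biden) / (early_total + late_total)

# Standard two-proportion Z-test, assuming both batches are random
# draws from one population (the assumption that is false here).
se = math.sqrt(pooled * (1 - pooled) * (1 / early_total + 1 / late_total))
z = (p2 - p1) / se
print(round(z))   # ≈ 201: "one in many more than quadrillions"
```

A Z-score of about 8 already corresponds to roughly one in a quadrillion; a score in the hundreds says nothing beyond "the two batches differ", which everybody already knew.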
None of this, of course, supports Paxton’s claims in the slightest. So, was Cicchetti really so stupid as to think he was proving anything? No, Cicchetti may be stupid but he’s not that stupid; Cicchetti briefly addresses the fact that his argument contains no argument. In regard to the late swing in Georgia, Cicchetti writes (par 16)
“I am aware of some anecdotal statements from election night that some Democratic strongholds were yet to be tabulated … [This] could cause the later ballots to be non-randomly different … but I am not aware of any actual [supporting] data …”
Yep, it’s up to others to demonstrate that the late votes went to Biden. Which, you know they kind of did, when they counted the fucking votes. As for Biden outperforming Clinton, Cicchetti writes (par 13),
“There are many possible reasons why people vote for different candidates. However, I find the increase of Biden over Clinton is statistically incredible if the outcomes were based on similar populations of voters …”
Yep, Cicchetti finds it “incredible” that four years of that motherfucker Trump had such an effect on how people voted.
The question below is from the second 2020 Specialist exam (not online), and was raised by commenter Red Five in the discussion here. This’ll probably turn into a WitCH but, really, the question is so damn stupid, it doesn’t deserve the honour.
MCQ4 (added 23/09/20) The question provides a histogram for a continuous distribution (bird beak sizes), and asks for the “closest” of five listed values to the interquartile range. As the examination report almost acknowledges (presumably in time for the grading), this cannot be determined from the histogram; three of the listed values may be closest, depending upon the precise distribution. The report suggests one of these values as the “best” estimate, but does not rely upon this suggestion. See the comments below.
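The ambiguity is easy to see with hypothetical binned data (not the exam’s histogram): a quartile is only located to within its bin, so an IQR read off a histogram is only pinned down to an interval, not a value.

```python
# Hypothetical unit-width histogram bins; NOT the exam's data.
lefts  = [0, 1, 2, 3, 4]      # left edges of the bins
counts = [4, 10, 12, 8, 6]    # observations in each bin
n = sum(counts)               # 40 observations

def bin_of(k):
    """Return the bin [lo, hi) containing the k-th smallest observation."""
    cum = 0
    for lo, c in zip(lefts, counts):
        cum += c
        if k <= cum:
            return lo, lo + 1
    raise ValueError("k out of range")

q1_lo, q1_hi = bin_of(n // 4)        # Q1 lies somewhere in this bin
q3_lo, q3_hi = bin_of(3 * n // 4)    # Q3 lies somewhere in this bin
iqr_min = q3_lo - q1_hi              # push Q1 to its bin top, Q3 to its bin bottom
iqr_max = q3_hi - q1_lo              # push Q1 to its bin bottom, Q3 to its bin top
print(iqr_min, iqr_max)              # IQR could be anywhere in [1, 3]
```

Depending upon how the data sit within their bins, several alternatives can each be “closest” to the true IQR, which is the trouble with the exam question.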
Q1(c)(ii) (added 13/11/20) – discussed here. The question is fundamentally nonsense, since there are infinitely many 1 x 3 matrices L that will solve the equation. As well, the 3 x 1 matrix given in the question does not represent the total value of the three products as indicated in Q(c)(i). The examination report does not acknowledge either error, but does add irony to the error by whining about students incorrectly answering with a 3 x 1 matrix.
MCQ9 Module 2 (added 30/09/20) The question refers to cutting a wedge of cheese to make a “similar” wedge of cheese, but the new wedge is not (mathematically) similar. The exam report states that the word “similar” was intended “in its everyday sense” but noted the confusion, albeit in a weasely, “who woulda thought?” manner. A second answer was marked correct, although only after a fight over the issue.
Q10(c) (added 13/11/20) – discussed here. The intended solution requires computing a doubly improper integral, which is beyond the scope of the subject. The examination report ducks the issue, by providing only an answer, with no accompanying solution.
Q3(b) (added 13/11/20) – discussed here. The wording of the question is fundamentally flawed, since the “maximum possible proportion” of the function does not exist here, and in any case need not be equal to the “limiting value” of the function. The examination “report” contains nothing but the intended answer.
MCQ20 (added 24/09/20) The notation refers to the forces in the question being asked, and seemingly also in the diagram for the question, but to the magnitudes of these forces in the suggested answers. The examination report doesn’t acknowledge the error.
We’re not really ready to embark upon this post, but it seems best to get it underway ASAP, and have commenters begin making suggestions.
It seems worthwhile to have all the Mathematical Methods exam errors collected in one place: this is to be the place.*
Our plan is to update this post as commenters point out the exam errors, and so slowly (or quickly) we will compile a comprehensive list.
To be as clear as possible, by “error”, we mean a definite mistake, something more directly wrong than pointlessness or poor wording or stupid modelling. The mistake can be intrinsic to the question, or in the solution as indicated in the examination report; examples of the latter could include an insufficient or incomplete solution, or a solution that goes beyond the curriculum. Minor errors are still errors and will be listed.
With each error, we shall also indicate whether the error is (in our opinion) major or minor, and we’ll indicate whether the examination report acknowledges the error, updating as appropriate. Of course there will be judgment calls, and we’re the boss. But, we’ll happily argue the tosses in the comments.
Q9(c), Section B (added 13/11/20) – discussed here. The question contains a fundamentally misleading diagram, and the solution involves the derivative of a function at the endpoint of a closed interval, which is beyond the scope of the course. The examination report is silent on both issues.
Q3(h), Section B (added 06/10/20) – discussed here. This is the error that convinced us to start this blog. The question concerns a “probability density function”, but with integral unequal to 1. As a consequence, the requested “mean” (part (i)) and “median” (part (ii)) make no definite sense.
There are three natural approaches to defining the “median” for part (ii), leading to three different answers to the requested two decimal places. Initially, the examination report acknowledged the issue, while weasely avoiding direct admission of the fundamental screw-up; answers to the nearest integer were accepted. A subsequent amendment, made over two years later, made the report slightly more honest, although the term “screw-up” still does not appear.
As noted in the comment and update to this post, the “mean” in part (i) is most naturally defined in a manner different to that suggested in the examination report, leading to a different answer. The examination report still fails to acknowledge any issue with part (i).
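For a sense of how the competing “medians” diverge, here is a toy sketch with a hypothetical function (not the exam’s): f(x) = x/2 + 0.1 on [0, 2], which integrates to 1.2 rather than 1. One can then demand half the area from the left, half the area from the right, or half of the actual total area (equivalently, the median of the normalised density); the three demands give three different answers to two decimal places.

```python
# A hypothetical "density" (NOT the exam's function): f(x) = x/2 + 0.1
# on [0, 2], whose integral is 1.2 rather than 1.

def F(m):
    """Area under f from 0 to m, i.e. m^2/4 + m/10."""
    return m * m / 4 + 0.1 * m

def solve(target):
    """Bisection: the m in [0, 2] with F(m) = target (F is increasing)."""
    lo, hi = 0.0, 2.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if F(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

total   = F(2.0)                 # 1.2, not 1
m_left  = solve(0.5)             # half the area from the left      -> 1.23
m_norm  = solve(total / 2)       # median of the normalised density -> 1.36
m_right = solve(total - 0.5)     # half the area from the right     -> 1.49
print(round(m_left, 2), round(m_norm, 2), round(m_right, 2))
```

When the integral is exactly 1 the three conditions coincide, which is why a genuine probability density function has an unambiguous median.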
Q4(c), Section B (added 25/09/20) The solution in the examination report sets up (but doesn’t use) the equation dy/dx = stuff = 0, instead of the correct d/dx(stuff) = 0.
MCQ4 (added 21/09/20) – discussed here. The described function need not satisfy any of the suggested conditions, as discussed here. The underlying issue is the notion of “inflection point”, which was (and is) undefined in the syllabus material. The examination report ignores the issue.
Q7(b) (added 23/09/20) The question asks students to “find p”, where p is the probability that a biased coin comes up heads, and where it turns out that p = 0 is one of the solutions. The question is fatally ambiguous, since there is no definitive answer to whether p = 0 is possible for a “biased coin”.
The examination report answer includes both values of p, while also noting “The cancelling out of p was rarely supported; many students incorrectly [sic] assumed that p could not be 0.” The implication, but not the certainty, is that although 0 was intended as a correct answer, students who left out or excluded 0 could receive full marks IF they explicitly “supported” this exclusion.
This is an archetypal example of the examiners stuffing up, refusing to acknowledge their stuff up, and refusing to attempt any proper repair of their stuff up. Entirely unprofessional and utterly disgraceful.
Yesterday, Bach had an op-ed in the official organ of the Liberal Party (paywalled, thank God). Titled We must raise our grades on teacher quality, Bach’s piece was the predictable mix of obvious truth and poisonous nonsense, promoting the testing of “numeracy” and so forth. One line, however, stood out as a beacon of Bachism:
“But, as in any profession, a small number of teachers is not up to the mark.”