The Oxford is Slow

Last year, Oxford University extended the length of its mathematics exams from 90 to 105 minutes. Why? So that female students would perform better, relative to male students. According to the University, the problem with shorter exams is that “female candidates might be more likely to be adversely affected by time pressure”.

Hmm.

There’s good reason to be unhappy with the low percentage of female mathematics students, particularly at advanced levels. So, Oxford’s decision is in response to a genuine issue and is undoubtedly well-intentioned. Their decision, however, also appears to be dumb, and it smells of dishonesty.

There are many suggestions as to why women are underrepresented in mathematics, and there’s plenty of room for thoughtful disagreement. (Of course there is also no shortage of pseudoscientific clowns and feminist nitwits.) Unfortunately, Oxford’s decision appears to be more in the nature of statistical manipulation than meaningful change.

Without more information, and the University has not been particularly forthcoming, it is difficult to know the effects of this decision. Reportedly, the percentage of Oxford’s female mathematics students awarded first class degrees increased from 21% in 2016 to 39% last year, while the percentage for male students increased marginally, to 47%. Oxford is presumably pleased, but without detailed information about score distributions and grade cut-offs it is impossible to understand what is underlying those percentages. Even if otherwise justified, however, Oxford’s decision constitutes deliberate grade inflation, and correspondingly its first class degree has been devalued.

The reported defences of Oxford’s decision tend only to undermine the decision. It seems that when the change was instituted last (Northern) summer, Oxford provided no rationale to the public. It was only last month, after The Times gained access to University documents under FOI, that the true reasons became known publicly. It’s a great way to sell a policy, of course, to be legally hounded into exposing your reasons.

Sarah Hart, a mathematician at the University of London, is quoted by The Times in support of longer exams: “Male students were quicker to answer questions, she said, but were more likely to get the answer wrong”. And, um, so we conclude what, exactly?

John Banzhaf, a prominent public interest lawyer, is reported as doubting Oxford’s decision could be regarded as “sexist”, since the extension of time was identical for male and female candidates. This is hilariously legalistic from such a politically wise fellow (who has some genuine mathematical nous).

The world is full of policies consciously designed to hurt one group or help another, and many of these policies are poorly camouflaged by fatuous “treating all people equally” nonsense. Any such policy can be good or bad, and well-intentioned or otherwise, but such crude attempts at camouflage are never honest or smart. The stated purpose of Oxford’s policy is to disproportionately assist female candidates; there are arguments for Oxford’s decision, and one need not accept the pejorative connotations of the word, but the policy is blatantly sexist.

Finally, there is the fundamental question of whether extending the exams makes them better exams. There is no way that someone unfamiliar with the exams and the students can know for sure, but there are reasons to be sceptical. It is in the nature of most exams that there is time pressure. That’s not perfect, and there are very good arguments for other forms of assessment in mathematics. But all assessment forms are artificial and/or problematic in some significant way. And an exam is an exam. Presumably the maths exams were previously 90 minutes for some reason, and in the public debate no one has provided any proper consideration or critique of any such reasons.

The Times quotes Oxford’s internal document in support of the policy: “It is thought that this [change in exam length] might mitigate the . . . gender gap that has arisen in recent years, and in any case the exam should be a demonstration of mathematical understanding and not a time trial.” 

This quote pretty much settles the question. No one has ever followed “and in any case” with a sincere argument.

Polynomially Perverse

What, with its stupid curricula, stupid texts and really monumentally stupid exams, it’s difficult to imagine a wealthy Western country with worse mathematics education than Australia. Which is why God gave us New Zealand.

Earlier this year we wrote about the first question on New Zealand’s 2016 Level 1 algebra exam:

A rectangle has an area of  \bf x^2+5x-36. What are the lengths of the sides of the rectangle in terms of  \bf x.

Obviously, the expectation was for the students to declare the side lengths to be the linear factors x – 4 and x + 9, and just as obviously this is mathematical crap. (Just to hammer the point, set x = 5, giving an area of 14, and think about what the side lengths “must” be.)
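To hammer with numbers (the arithmetic is ours): the intended factorisation is

    \[\boldsymbol{x^2+5x-36 \ = \ (x-4)(x+9)\,,}\]

so at x = 5 the intended sides are 1 and 14. But a rectangle of area 14 could just as happily have sides 2 and 7, or 14/3 and 3, or whatever: the area of a rectangle determines nothing about its individual sides.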

One might hope that, having inflicted this mathematical garbage on a nation of students, the New Zealand Qualifications Authority would have been gently slapped around by a mathematician or two, and that the error would not be repeated. One might hope this, but, in these idiot times, it would be very foolish to expect it.

A few weeks ago, New Zealand maths education was in the news (again). There was lots of whining about “disastrous” exams, with “impossible” questions, culminating in a pompous petition, and ministerial strutting and general hand-wringing. Most of the complaints, however, appear to be pretty trivial; sure, the exams were clunky in certain ways, but nothing that we could find was overly awful, and nothing that warranted the subsequent calls for blood.

What makes this recent whining so funny is the comparison with the deafening silence in September. That’s when the 2017 Level 1 Algebra Exams appeared, containing the exact same rectangle crap as in 2016 (Question 3(a)(i) and Question 2(a)(i)). And, as in 2016, there is no evidence that anyone in New Zealand had the slightest concern.

People like to make fun of all the sheep in New Zealand, but there are many more sheep there than anyone suspects.

Fixations and Madness

Our sixth and final post on the 2017 VCE exam madness is on some recurring nonsense in Mathematical Methods. The post will be relatively brief, since a proper critique of every instance of the nonsense would be painfully long, and since we’ve said it all before.

The mathematical problem concerns, for a given function f, finding the solutions to the equation

    \[\boldsymbol{(1)\qquad\qquad f(x) \ = \ f^{-1}(x)\,.}\]

This problem appeared, in various contexts, on last month’s Exam 2 in 2017 (Section B, Questions 4(c) and 4(i)), on the Northern Hemisphere Exam 1 in 2017 (Questions 8(b) and 8(c)), on Exam 2 in 2011 (Section 2, Question 3(c)(ii)), and on Exam 2 in 2010 (Section 2, Question 1(a)(iii)).

Unfortunately, the technique presented in the three Examiners’ Reports for solving equation (1) is fundamentally wrong. (The Reports are here, here and here.) In synch with this wrongness, the standard textbook considers four misleading examples, and its treatment of the examples is infused with wrongness (Chapter 1F). It’s a safe bet that the forthcoming Report on the 2017 Methods Exam 2 will be plenty wrong.

What is the promoted technique? It is to ignore the difficult equation above, and to solve instead the presumably simpler equation

    \[ \boldsymbol{(2) \qquad\qquad  f(x) \ = \  x\,,}\]

or perhaps the equation

    \[\boldsymbol{(2)' \qquad\qquad f^{-1}(x)\ = \ x \,.}\]

Which is wrong.

It is simply not valid to assume that either equation (2) or (2)’ is equivalent to (1). Yes, as long as the inverse of f exists, equation (2)’ is equivalent to equation (2): a solution x to (2)’ will also be a solution to (2), and vice versa. And, yes, any solution to (2) and (2)’ will then also be a solution to (1). The converse, however, is in general false: a solution to (1) need not be a solution to (2) or (2)’.

It is easy to come up with functions illustrating this, or think about the graph above, or look here.
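To give just one such function (our example, not the exam’s): take f(x) = 1/x with domain (0,∞). Then f is its own inverse, so

    \[\boldsymbol{f(x) \ = \ \frac{1}{x} \ = \ f^{-1}(x) \qquad (x>0)\,,}\]

and every positive x solves equation (1), while equation (2) amounts to 1/x = x and is solved only by x = 1. Decreasing functions are the standard trap.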

OK, the VCAA might argue that the exams (and, except for a couple of up-in-the-attic exercises, the textbook) are always concerned with functions for which solving (2) or (2)’ happens to suffice, so what’s the problem? The problem is that this argument would be idiotic.

Suppose that we taught students that roots of polynomials are always integers, instructed the students to only check for integer solutions, and then carefully arranged for the students to only encounter polynomials with integer solutions. Clearly, that would be mathematical and pedagogical crap. The treatment of equation (1) in Methods exams, and the close to universal treatment in Methods more generally, is identical.

OK, the VCAA might continue to argue that the students have their (stupefying) CAS machines at hand, and that the graphs of the particular functions under consideration make clear that solving (2) or (2)’ suffices. There would then be three responses:

(i) No one tests whether Methods students do anything like a graphical check, or anything whatsoever.

(ii) Hardly any Methods students do do anything. The overwhelming majority of students treat equations (1), (2) and (2)’ as automatically equivalent, and they have been given explicit licence by the Examiners’ Reports to do so. Teachers know this and the VCAA knows this, and any claim otherwise is a blatant lie. And, for any reader still in doubt about what Methods students actually do, here’s a thought experiment: imagine the 2018 Methods exam requires students to solve equation (1) for the function f(x) = (x-2)/(x-1), and then imagine the consequences. (We work through this example just after the list.)

(iii) Even if students were implicitly or explicitly arguing from CAS graphics, “Look at the picture” is an absurdly impoverished way to think about or to teach mathematics, or pretty much anything. The power of mathematics is to be able to take the intuition and to either demonstrate what appears to be true, or demonstrate that the intuition is misleading. Wise people are wary of the treachery of images; the VCAA, alas, promotes it.
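As promised, here is that thought experiment worked through (the computations are ours). For f(x) = (x-2)/(x-1), a direct check gives f(f(x)) = x, so f is its own inverse and equation (1) holds for every x in the domain. Equation (2), on the other hand, becomes

    \[\boldsymbol{\frac{x-2}{x-1} \ = \ x \quad\Longleftrightarrow\quad x^2 - 2x + 2 \ = \ 0\,,}\]

which has no real solutions. A student dutifully following the Reports’ recipe would conclude that equation (1) has no solutions, when in truth every x in the domain is a solution.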

The real irony and idiocy of this situation is that, with natural conditions on the function f, equation (1) is equivalent to equations (2) and (2)’, and that it is well within reach of Methods students to prove this. If, for example, f is a strictly increasing function then it can readily be proved that the three equations are equivalent. Working through and applying such results would make for excellent lessons and excellent exam questions.
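To indicate how such a proof might run (a sketch, ours): suppose f is strictly increasing and f(x) = f^{-1}(x) = y, with, say, y > x. Then, applying f and using f(f^{-1}(x)) = x,

    \[\boldsymbol{x \ = \ f\left(f^{-1}(x)\right) \ = \ f(y) \ > \ f(x) \ = \ y\,,}\]

contradicting y > x. The case y < x fails in the same way, and so y = x: for strictly increasing f, every solution of equation (1) is a solution of equation (2).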

Instead, what we have is crap. Every year, year after year, thousands of Methods students are being taught and are being tested on mathematical crap.

The Madness of Crowd Models

Our fifth and penultimate post on the 2017 VCE exam madness concerns Question 3 of Section B on the Northern Hemisphere Specialist Mathematics Exam 2. The question begins with the logistic equation for the proportion P of a petri dish covered by bacteria:

    \[\boldsymbol{\frac{{\rm d} P}{{\rm d} t\ }= \frac{P}{2}\left(1 - P\right)\,\qquad 0 < P < 1\,.}\]

This is not a great start, since it’s a little peculiar using the logistic equation to model an area proportion, rather than a population or a population density. It’s also worth noting that the strict inequalities on P are unnecessary and rule out of consideration the equilibrium (constant) solutions P = 0 and P = 1.

Clunky framing aside, part (a) of Question 3 is pretty standard, requiring the solving of the above (separable) differential equation with initial condition P(0) = 1/2. So, a decent integration problem trivialised by the presence of the stupefying CAS machine.
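For the record, the integration runs as follows (our working, presumably close to what was intended). Separating and using partial fractions,

    \[\boldsymbol{\int \frac{{\rm d} P}{P(1-P)} \ = \ \int \frac{{\rm d} t}{2} \quad\Longrightarrow\quad \log_e\frac{P}{1-P} \ = \ \frac{t}{2} + c\,,}\]

and then the initial condition P(0) = 1/2 forces c = 0, giving

    \[\boldsymbol{P \ = \ \frac{1}{1+e^{-t/2}}\,.}\]

After which things go seriously off the rails.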

The setting for part (b) of the question has a toxin added to the petri dish at time t = 1, with the bacterial growth then modelled by the equation

    \[\boldsymbol{\frac{{\rm d} P}{{\rm d} t\ }= \frac{P}{2}\left(1 - P\right) - \frac{\sqrt{P}}{20}\,.}\]

Well, probably not. The effect of toxins is most simply modelled as depending linearly on P, and there seems to be no argument for the square root. Still, this kind of fantasy modelling is par for the VCAA‘s crazy course. Then, however, comes Question 3(b):

Find the limiting value of P, which is the maximum possible proportion of the Petri dish that can now be covered by the bacteria.

The question is a mess. And it’s wrong.

The Examiners’ “Report” (which is not a report at all, but merely a list of short answers) fails to indicate what students did or how well they did on this short, 2-mark question. Presumably the intent was for students to find the limit of P by finding the maximal equilibrium solution of the differential equation. So, setting dP/dt = 0 implies that the right hand side of the differential equation is also 0. The resulting equation is not particularly nice, a quartic equation for Q = √P. Just more silly CAS stuff, then, giving the largest solution P = 0.894 to the requested three decimal places.
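To indicate the computation (our reconstruction of the presumably intended method): writing Q = √P, setting the right hand side of the differential equation to 0 amounts to

    \[\boldsymbol{\frac{Q^2}{2}\left(1 - Q^2\right) - \frac{Q}{20} \ = \ 0 \quad\Longrightarrow\quad Q\left(10Q^3 - 10Q + 1\right) \ = \ 0\,,}\]

and the largest root of the cubic factor is Q ≈ 0.9456, squaring to the reported P ≈ 0.894.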

In principle, applying that approach here is fine. There are, however, two major problems.

The first problem is with the wording of the question: “maximum possible proportion” simply does not mean maximal equilibrium solution, nor much of anything. The maximum possible proportion covered by the bacteria is P = 1. Alternatively, if we follow the examiners and needlessly exclude P = 1 from consideration, then there is no maximum possible proportion, and P can just be arbitrarily close to 1. Either way, a large initial P will decay down to the maximal equilibrium solution.

One might argue that the examiners had in mind a continuation of part (a), so that the proportion begins below the equilibrium value and then rises towards it. That wouldn’t rescue the wording, however. The equilibrium solution is still not a maximum, since the equilibrium value is never actually attained. The expression the examiners are missing, and may possibly even have heard of, is least upper bound. That expression is too sophisticated to be used on a school exam, but whose problem is that? It’s the examiners who painted themselves into a corner.

The second issue is that it is not at all obvious – indeed it can easily fail to be true – that the maximal equilibrium solution for P will also be the limiting value of P. The garbled information within question (b) is instructing students to simply assume this. Well, ok, it’s their question. But why go to such lengths to impose a dubious and impossible-to-word assumption, rather than simply asking directly for an equilibrium solution?

To clarify the issues here, and to show why the examiners were pretty much doomed to make a mess of things, consider the following differential equation:

    \[\boldsymbol{\frac{{\rm d} P}{{\rm d} t\ }= 3P - 4P^2 - \sqrt{P}\,.}\]

By setting Q = √P, for example, it is easy to show that the equilibrium solutions are P = 0 and P = 1/4. Moreover, by considering the sign of dP/dt for P above and below the equilibrium P = 1/4, it is easy to obtain a qualitative sense of the general solutions to the differential equation:
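The sign analysis can be captured in one line (the factorisation is ours): with Q = √P, the right hand side of the differential equation factorises as

    \[\boldsymbol{3P - 4P^2 - \sqrt{P} \ = \ -Q\left(Q+1\right)\left(2Q-1\right)^2\,,}\]

which is negative for every positive P except at the equilibrium P = 1/4, where it vanishes. So, every non-equilibrium solution with P(0) > 0 is strictly decreasing.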

In particular, it is easy to see that the constant solution P = 1/4 is a semi-stable equilibrium: if P(0) is slightly above 1/4 then P(t) will decay down to 1/4, but if P(0) is slightly below 1/4 then P(t) will decay all the way to the stable equilibrium P = 0.

This type of analysis, which can readily be performed on the toxin equation above, is simple, natural and powerful. And, it seems, non-existent in Specialist Mathematics. The curriculum contains nothing that suggests or promotes any such analysis, nor even a mention of equilibrium solutions. The same holds for the standard textbook, in which, for example, the equation for Newton’s law of cooling is solved (clumsily), but there’s not a word of insight into the solutions.

And this explains why the examiners were doomed to fail. Yes, they almost stumbled into writing a good, mathematically rich exam question. The paper-thin curriculum, however, wouldn’t permit it.

 

A Madness for all Seasons

Our fourth post on the 2017 VCE exam madness will be similar to our previous post: a quick whack of a straight-out error. This error was flagged by a teacher friend, David. (No, not that David.)

The 11th multiple choice question on the first Further Mathematics Exam reads as follows:

Which one of the following statistics can never be negative? 

A. the maximum value in a data set

B. the value of a Pearson correlation coefficient

C. the value of a moving mean in a smoothed time series

D. the value of a seasonal index

E. the value of a slope of a least squares line fitted to a scatterplot

Before we get started, a quick word on the question’s repeated use of the redundant “the value of”.

Bleah!

Now, on with answering the question.

It is pretty obvious that the statistics in A, B, C and E can all be negative, so presumably the intended answer is D. However, D is also wrong: a seasonal index can also be negative. Unfortunately the explanation of “seasonal index” in the standard textbook is lost in a jungle of non-explanation, so to illustrate we’ll work through a very simple example.

Suppose a company’s profits and losses over the four quarters of a year are as follows:

    \[ \begin{tabular} {| c | c | c | c |}\hline {\bf\phantom{S}Summer \phantom{I}} &{\bf\phantom{S}Autumn \phantom{I}} &{\bf\phantom{S}Winter \phantom{I}} &{\bf\phantom{S}Spring \phantom{I}} \\  \hline {\bf \$6000} & {\bf -\$1000} & {\bf -\$2000} & {\bf \$5000}\\ \hline \end{tabular}\]

So, the total profit over the year is $8000, making the average quarterly profit $2000. The seasonal index (SI) for each quarter is then that quarter’s profit (or loss) divided by the average quarterly profit:

    \[ \begin{tabular} {| c | c | c | c |}\hline {\bf Summer SI} &{\bf Autumn SI} &{\bf Winter SI} &{\bf Spring SI} \\  \hline {\bf 3} & {\bf -0.5} & {\bf -1.0} & {\bf 2.5}\\ \hline \end{tabular}\]

Clearly this example is general, in the sense that in any scenario where the seasonal data are both positive and negative, some of the seasonal indices will be negative. So, the exam question is not merely technically wrong, with a contrived example raising issues: the question is wrong wrong.

Now, to be fair, this time the VCAA has a defence. It appears to be more common to apply seasonal indices in contexts where all the data are of one sign, or to use absolute values and then to consider magnitudes of deviations. It also appears that most or all examples Further students would have studied included only positive data.

So, yes, the VCAA (and the Australian Curriculum) don’t bother to clarify the definition or permitted contexts for seasonal indices. And yes, the definition in the standard textbook implicitly permits negative seasonal indices. And yes, by this definition the exam question is plain wrong. But, hopefully most students weren’t paying sufficient attention to realise that the VCAA weren’t paying sufficient attention, and so all is ok.

Well, the defence is something like that. The VCAA can work on the wording.

 

Further Madness

Our third post on the 2017 VCE exam madness will be brief, on a question containing a flagrant error.

The first question in the matrix module of Further Mathematics’ Exam 2 is concerned with a school canteen selling pies, rolls and sandwiches over three separate weeks. The number of items sold is set up as a 3 x 3 matrix, one row for each week and one column for each food choice. The last part, (c)(ii), of the question then reads:

The matrix equation below shows that the total value of all rolls and sandwiches sold in these three weeks is $915.60 

    \[   \boldsymbol{L \times\begin{bmatrix} 491.55 \\ 428.00\\ 487.60 \end{bmatrix} \ = \ [915.60]}\]

Matrix L in this equation is of order 1 x 3.

Write down matrix L.

This 1-mark question is presumably meant to be a gimme, with answer L = [0 1 1]. Unfortunately the question is both weird and wrong. (And lacking in punctuation. Guys, it’s not that hard.) The wrongness comes from the examiners having confused their rows and columns. As is made clear in the previous part, (c)(i), of the question, the 3 x 1 matrix of numbers indicates the total earnings from each of the three weeks, not from each of the three food choices. So, the equation indicates the total value of all products sold in weeks 2 and 3.
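Indeed, the intended answer does satisfy the equation, which is presumably how the error slipped through (the arithmetic check is ours):

    \[\boldsymbol{\begin{bmatrix} 0 & 1 & 1 \end{bmatrix} \times\begin{bmatrix} 491.55 \\ 428.00\\ 487.60 \end{bmatrix} \ = \ [\,428.00 + 487.60\,] \ = \ [915.60]\,.}\]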

There’s not much to say about such an obvious error. It is very easy to confuse rows and columns, and we’ve all done it on occasion, but if VCAA’s vetting cannot catch this kind of mistake then it cannot be relied upon to catch anything. The only question is how the Examiners’ Report will eventually address the error. The VCAA is well-practised in cowardly silence and weasel-wording, but it would be exceptionally Trumplike to attempt such tactics here.

Error aside, the question is artificial, and it is not clear that the matrix equation “shows” much of anything. Yes, 0-1 and on-or-off matrices are important and useful, but the use of such a matrix in this context is contrived and confusing. Not a hanging offence, and benign by VCAA’s standards, but the question is pretty silly. And, not forgetting, wrong.

There’s Madness in the Methods

Yes, we’ve used that title before, but it’s a damn good title. And there is so much madness in Mathematical Methods to cover. And not only Methods. Victoria’s VCE exams are coming to an end, the maths exams are done, and there is all manner of new and astonishing nonsense to consider. This year, the Victorian Curriculum and Assessment Authority have outdone themselves.

Over the next week we’ll put up a series of posts on significant errors in the 2017 Methods, Specialist Maths and Further Maths exams, including in the mid-year Northern Hemisphere exams.

By “significant error” we mean more than just a pointless exercise in button-pushing, or tone-deaf wording, or idiotic pseudomodelling, or aimless pedantry, all of which is endemic in VCE maths exams. A “significant error” in an exam question refers to a fundamental mathematical flaw with the phrasing, or with the intended answer, or with the (presumed or stated) method that students were supposed to use. Not all the errors that we shall discuss are large, but they are all definite errors, they are errors that would have (or at least should have) misled some students, and none of these errors should have occurred. (It is courtesy of diligent (and very annoyed) maths teachers that we learned of most of these questions.) Once we’ve documented the errors, we’ll post on the reasons that the errors are so prevalent, on the pedagogical and administrative climate that permits and encourages them.

Our first post concerns Exam 1 of Mathematical Methods. In the final question, Question 9, students consider the function \boldsymbol{ f(x) =\sqrt{x}(1-x)} on the closed interval [0,1], pictured below. In part (b), students are required to show that, on the open interval (0,1), “the gradient of the tangent to the graph of f” is (1-3x)/(2\sqrt{x}). A clumsy combination of calculation and interpretation, but ok. The problem comes when students then have to consider tangents to the graph.
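For reference, the part (b) computation is the standard one, valid at the least on the open interval:

    \[\boldsymbol{f(x) \ = \ x^{\frac12} - x^{\frac32} \quad\Longrightarrow\quad f'(x) \ = \ \frac{1}{2\sqrt{x}} - \frac{3\sqrt{x}}{2} \ = \ \frac{1-3x}{2\sqrt{x}}\,.}\]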

In part (c), students take the angle θ in the picture to be 45 degrees. The pictured tangents then have slopes 1 and -1, and the students are required to find the equations of these two tangents. And therein lies the problem: it turns out that the “derivative” of f is equal to -1 at the endpoint x = 1. However, though the natural domain of the function \sqrt{x}(1-x) is [0,∞), the students are explicitly told that the domain of f is [0,1].

This is obvious and unmitigated madness.

Before we hammer the madness, however, let’s clarify the underlying mathematics.

Does the derivative/tangent of a suitably nice function exist at an endpoint? It depends upon who you ask. If the “derivative” is to exist then the standard “first principles” definition must be modified to be a one-sided limit. So, for our function f above, we would define

    \[f'(1) = \lim_{h\to0^-}\frac{f(1+h) - f(1)}{h}\,.\]

This is clearly not too difficult to do, and with this definition we find that f'(1) = -1, as implied by the Exam question. (Note that since f naturally extends to the right of x = 1, the actual limit computation can be circumvented.) However, and this is the fundamental point, not everyone does this.

At the university level it is common, though far from universal, to permit differentiability at the endpoints. (The corresponding definition of continuity on a closed interval is essentially universal, at least after first year.) At the school level, however, the waters are much muddier. The VCE curriculum and the most popular and most respected Methods textbook appear to be completely silent on the issue. (This textbook also totally garbles the related issue of derivatives of piecewise defined (“hybrid”) functions.) We suspect that the vast majority of Methods teachers are similarly silent, and that the minority of teachers who do raise the issue would not in general permit differentiability at an endpoint.

In summary, it is perfectly acceptable to permit derivatives/tangents to graphs at their endpoints, and it is perfectly acceptable to proscribe them. It is also perfectly acceptable, at least at the school level, to avoid the issue entirely, as is done in the VCE curriculum, by most teachers and, in particular, in part (b) of the Exam question above.

What is blatantly unacceptable is for the VCAA examiners to spring a completely gratuitous endpoint derivative on students when the issue has never been raised. And what is pure and unadulterated madness is to spring an endpoint derivative after carefully and explicitly avoiding it on the immediately previous part of the question.

The Victorian Curriculum and Assessment Authority has a long tradition of scoring own goals. The question above, however, is spectacular. Here, the VCAA is like a goalkeeper grasping the ball firmly in both hands, taking careful aim, and flinging the ball into his own net.

The Treachery of Images

Harry scowled at a picture of a French girl in a bikini. Fred nudged Harry, man-to-man. “Like that, Harry?” he asked.

“Like what?”

“The girl there.”

“That’s not a girl. That’s a piece of paper.”

“Looks like a girl to me.” Fred Rosewater leered.

“Then you’re easily fooled,” said Harry. “It’s done with ink on a piece of paper. That girl isn’t lying there on the counter. She’s thousands of miles away, doesn’t even know we’re alive. If this was a real girl, all I’d have to do for a living would be to stay at home and cut out pictures of big fish.”

                       Kurt Vonnegut, God Bless You, Mr. Rosewater

 

It is fundamental to be able to distinguish appearance from reality. That it is very easy to confuse the two is famously illustrated by Magritte’s The Treachery of Images (La Trahison des Images):

The danger of such confusion is all the greater in mathematics. Mathematical images, graphs and the like, have intuitive appeal, but these images are mere illustrations of deep and easily muddied ideas. The danger of focussing upon the image, with the ideas relegated to the shadows, is a fundamental reason why the current emphasis on calculators and graphical software is so misguided and so insidious.

Which brings us, once again, to Mathematical Methods. Question 5 on Section Two of the second 2015 Methods exam is concerned with the function V:[0,5]\rightarrow\Bbb R, where

    \[\boldsymbol{V(t) \ = \ de^{\frac{t}3} + (10-d)e^{\frac{-2t}3}\,.}\]

Here, d \in (0,10) is a constant, with d=2 initially; students are asked to find the minimum (which occurs at t = \log_e8), and to graph V. All this is par for the course: a reasonable calculus problem thoroughly trivialised by CAS calculators. Predictably, things get worse.

In part (c)(i) of the problem students are asked to find “the set of possible values of d” for which the minimum of V occurs at t=0. (Part (c)(ii) similarly, and thus boringly and pointlessly, asks for which d the minimum occurs at t=5). Arguably, the set of possible values of d is (0,10), which of course is not what was intended; the qualification “possible” is just annoying verbiage, in which the examiners excel.

So, on to considering what the students were expected to have done for (c)(i), a 2-mark question, equating to three minutes. The Examiners’ Report pointedly remarks that “[a]dequate working must be shown for questions worth more than one mark.” What, then, constituted “adequate working” for 5(c)(i)? The Examiners’ solution consists of first setting V'(0)=0 and solving to give d=20/3, and then … well, nothing. Without further comment, the examiners magically conclude that the answer to (c)(i) is 20/3 \leqslant d< 10.

Only in the Carrollian world of Methods could the examiners’ doodles be regarded as a summary of or a signpost to any adequate solution. In truth, the examiners have offered no more than a mathematical invocation, barely relevant to the question at hand: why should V having a stationary point at t=0 for d=20/3 have any bearing on V for other values of d? The reader is invited to attempt a proper and substantially complete solution, and to measure how long it takes. Best of luck completing it within three minutes, and feel free to indicate how you went in the comments.
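For comparison, here is a sketch (ours) of what a proper solution involves. Solving V'(t) = 0 in general,

    \[\boldsymbol{\frac{d}{3}\,e^{\frac{t}3} \ = \ \frac{2(10-d)}{3}\,e^{\frac{-2t}3} \quad\Longrightarrow\quad t^* \ = \ \log_e\frac{2(10-d)}{d}\,.}\]

Since V'(t) is negative for t < t* and positive for t > t*, the minimum of V occurs at t = 0 exactly when t* ≤ 0, that is, when 2(10-d)/d ≤ 1, giving 20/3 ≤ d < 10. It is the monotonicity argument in the middle, nowhere hinted at in the examiners’ doodles, that constitutes an adequate solution.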

It is evident that the vast majority of students couldn’t make heads or tails of the question, which says more for them than for the examiners. Apparently about half the students solved V'(0)=0 and included d = 20/3 in some form in their answer, earning them one mark. Very few students got further; 4% of students received full marks on the question (and similarly on (c)(ii)).

What did the examiners actually hope for? It is pretty clear that what students were expected to do, and the most that students could conceivably do in the allotted time, was: solve V'(0)=0 (i.e. press SOLVE on the machine); then, look at the graphs (on the machine) for two or three values of d; then, simply presume that the graphs of V for all d are sufficiently predictable to “conclude” that 20/3 is the largest value of d for which the (unique) turning point of V lies in [0,5]. If it is not immediately obvious that any such approach is mathematical nonsense, the reader is invited to answer (c)(i) for the function W:[0,5]\rightarrow\Bbb R where W(t) = (6-d)t^2 + (d-2)t.
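And, working through that W (our computation): solving W'(0) = 0 gives d = 2, which is all the press-SOLVE recipe can deliver. However, writing

    \[\boldsymbol{W(t) \ = \ t\left((6-d)t + (d-2)\right)}\]

and considering the two zeros of W, it is easy to check that the minimum of W on [0,5] occurs at t = 0 precisely when 2 ≤ d ≤ 7 (in the border case d = 7 the minimum is shared with t = 5). The recipe doesn’t merely lack justification; it gives a plain wrong answer.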

Once upon a time, Victorian Year 12 students were taught mathematics, were taught to prove things. Now, they’re taught to push buttons and to gaze admiringly at pictures of big fish.

NAPLAN’s Mathematical Nonsense, and What it Means for Rural Peru

The following question appeared on Australia’s Year 9 NAPLAN Numeracy Test in 2009:

y = 2x – 1

y = 3x + 2

Which value of x satisfies both of these equations?

It is a multiple choice question, but unfortunately “The question is completely stuffed” is not one of the available answers.

Of course the fundamental issue with simultaneous equations is the simultaneity. Both equations and both variables must be considered as a whole; it simply makes no sense to talk about solutions for x without reference to y. Unless y = -7 in the above equations, and there is no reason to assume that, no value of x satisfies both equations.
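What the writers presumably had in mind (our guess, and our arithmetic) is the simultaneous solution:

    \[\boldsymbol{2x - 1 \ = \ 3x + 2 \quad\Longrightarrow\quad x \ = \ -3\,,\quad y \ = \ -7\,.}\]

That is, the pair (x, y) = (-3, -7) satisfies both equations; the value x = -3 on its own satisfies nothing. The NAPLAN question is way beyond bad.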

It is always worthwhile pointing out NAPLAN nonsense, as we’ve done before and will continue to do in the future. But what does this have to do with rural Peru?

In a recent post we pointed out an appalling question from a nationwide mathematics exam in New Zealand. We flippantly remarked that one might expect such nonsense in rural Peru but not in a wealthy Western country such as New Zealand. We were then gently slapped in the comments for the Peruvian references: Josh queried whether we knew anything of Peru’s educational system; and, Dennis questioned the purpose of bringing up Peru, since Australia’s NAPLAN demonstrates a “level of stupidity” for all the World to see. These are valid points.

It would have been prudent to have found out a little about Peru before posting, but we seem to be safe. Peru’s economy has been growing rapidly but is not nearly as strong as New Zealand’s or Australia’s. Peruvian school education is weak, and Peru seems to have no universities comparable to the very good universities in New Zealand and Australia. Life and learning in rural Peru appears to be pretty tough.

None of this is surprising, and none of it particularly matters. Our blog post referred to “rural Peru or wherever”. The point was that we can expect poorer education systems to throw up nonsense now and then, or even typically; in particular, lacking ready access to good and unharried mathematicians, it is unsurprising if exams and such are mathematically poor and error-prone.

But what could possibly be New Zealand’s excuse for that idiotic question? Even if the maths ed crowd didn’t know what they were doing, there is simply no way that a competent mathematician would have permitted that question to remain as is, and there are plenty of excellent mathematicians in New Zealand. How did a national exam in New Zealand fail to be properly vetted? Where were the mathematicians?

Which brings us to Australia and to NAPLAN. How could the ridiculous problem at the top of this post, or the question discussed here, make it into a nationwide test? Once again: where were the mathematicians?

One more point. When giving NAPLAN a thoroughly deserved whack, Dennis was not referring to blatantly ill-formed problems of the type above, but rather to a systemic and much more worrying issue. Dennis noted that NAPLAN doesn’t offer a mathematics test or an arithmetic test, but rather a numeracy test. Numeracy is pedagogical garbage and in the true spirit of numeracy, NAPLAN’s tests include no meaningful evaluation of arithmetic or algebraic skills. And, since we’re doing the Peru thing, it seems worth noting that numeracy is undoubtedly a first world disease. It is difficult to imagine a poorer country, one which must weigh every educational dollar and every educational hour, spending much time on numeracy bullshit.

Finally, a general note about this blog. It would be simple to write amusing little posts about this or that bit of nonsense in, um, rural Peru or wherever. That, however, is not the purpose of this blog. We have no intention of making easy fun of people or institutions honestly struggling in difficult circumstances; that includes the vast majority of Australian teachers, who have to tolerate and attempt to make sense of all manner of nonsense flung at them from on high. Our purpose is to point out the specific idiocies of arrogant, well-funded educational authorities that have no excuse for screwing up in the manner in which they so often do.

Factoring in the Stupidity

It is very brave to claim that one has found the stupidest maths exam question of all time. And the claim is probably never going to be true: there will always be some poor education system, in rural Peru or wherever, doing something dumber than anything ever done before. For mainstream exams in wealthy Western countries, however, New Zealand has come up with something truly exceptional.

Last year, New Zealand students at Year 11 sat one of two algebra exams administered by the New Zealand Qualifications Authority. The very first question on the second exam reads:

A rectangle has an area of  \bf x^2+5x-36. What are the lengths of the sides of the rectangle in terms of  \bf x.

The real problem here is to choose the best answer, which we can probably all agree is sides of length \pi and (x^2+5x-36)/\pi.

OK, clearly what was intended was for students to factorise the quadratic and to declare the factors as the sidelengths of the rectangle. Which is mathematical lunacy. It is simply wrong.

Indeed, the question would arguably still have been wrong, and would definitely still have been awful, even if it had been declared that x has a unit of length: who wants students to be thinking that the area of a rectangle uniquely determines its sidelengths? But, even that tiny sliver of sense was missing.

So, what did students do with this question? (An equivalent question, 3(a)(i), appeared on the first exam.) We’re guessing that, seeing no alternative, the majority did exactly what was intended and factorised the quadratic. So, no harm done? Hah! It is incredible that such a question could make it onto a national exam, but it gets worse.

The two algebra exams were widely and strongly criticised, by students and teachers and the media. People complained that the exams were too difficult and too different in style from what students and teachers had been led to expect. Both types of criticism may well have been valid. For all of the public criticism of the exams, however, we could find no evidence of the above question or its Exam 1 companion being flagged. Plenty of complaining about hard questions, plenty of complaining about unexpected questions, but not a word about straight out mathematical crap.

So, not only do questions devoid of mathematical sense appear on a nationwide exam. It then appears that the entire nation of students is being left to accept that this is what mathematics is: meaningless autopilot calculation. Well done, New Zealand. You’ve made the education authorities in rural Peru feel very much better about themselves.