OK, after a long period of dealing with other stuff (shovelled on by the evil Mathologer), we’re back. There’s a big backlog, and in particular we’re working hard to find an ounce of sense in Gonski, Version N. But, first, there’s a competition to finalise, and an associated educational authority to whack.
It appears that no one pays any attention to Western Australian maths education. This, as we’ll see, is a good thing. (Alternatively, no one gave a stuff about the prize, in which case, fair enough.) So, congratulations to Number 8, who wins by default. We’ll be in touch.
A reminder, the competition was to point out the nonsense in Part 1 and Part 2 of the 2017 West Australian Mathematics Applications Exam. As with our previous challenge, this competition was inspired by one specifically awful question. The particular Applications question, however, should not distract from the Exam’s very general clunkiness. The entire Exam is amateurish, as one rabble rouser expressed it, plagued by clumsy mathematics and ambiguous phrasing.
The heavy lifting in the critique below is due to the semi-anonymous Charlie. So, a very big thanks to Charlie, specifically for his detailed remarks on the Exam, and more generally for not being willing to accept that a third rate exam is simply par for WA’s course. (Hello, Victorians? Anyone there? Hello?)
We’ll get to the singularly awful question, and the singularly awful formal response, below. First, however, we’ll provide a sample of some of the examiners’ lesser crimes. None of these other crimes are hanging offences, though some slapping wouldn’t go astray, and a couple of questions probably warrant a whipping. We won’t go into much detail; clarification can be gained by referring to the Exam papers. We also don’t address the Exam as a whole in terms of the adequacy of its coverage of the Applications curriculum, though there are apparently significant issues in this regard.
Question 1, the phrasing is confusing in parts, as was noted by Number 8. It would have been worthwhile for the examiners to explicitly state that the first term Tn corresponds to n = 1. Also, when asking for the first term (i.e. the first Tn) less than 500, it would have helped to have specifically asked for the corresponding index n (which is naturally obtained as a first step), and then for Tn.
Question 2(b)(ii), it is a little slack to claim that “an allocation of delivery drivers cannot be made yet”.
Question 5 deals with a survey, a table of answers to a yes-or-no question. It grates to have the responses to the question recorded as “agree” or “disagree”. In part (b), students are asked to identify the explanatory variable; the answer, however, depends upon what one is seeking to explain.
Question 6(a) is utterly ridiculous. The choice for the student is either to embark upon a laborious and calculator-free and who-gives-a-damn process of guess-and-check-and-cross-your-fingers, or to solve the travelling salesman problem.
Question 8(b) is clumsily and critically ambiguous, since it is not stated whether the payments are to be made at the beginning or the end of each quarter.
Question 10 involves some pretty clunky modelling. In particular, starting with 400 bacteria in a dish is out by an order of magnitude, or six.
Question 11(d) is worded appallingly. We are told that one of two projects will require an extra three hours to complete. Then we have to choose which project “for the completion time to be at a minimum”. Yes, one can make sense of the question, but it requires a monster of an effort.
Question 14 is fundamentally ambiguous, in the same manner as Question 8(b); it is not indicated whether the repayments are to be made at the beginning or end of each period.
That was good fun, especially the slapping. But now it’s time for the main event:
Question 3(a) concerns a planar graph with five faces and five vertices, A, B, C, D and E:
What is wrong with this question? As evinced by the graphs pictured above, pretty much everything.
As pointed out by Number 8, Part (i) can only be answered (by Euler’s formula) if the graph is assumed to be connected. In Part (ii), it is weird and it turns out to be seriously misleading to refer to “the” planar graph. Next, the Hamiltonian cycle requested in Part (iii) is only guaranteed to exist if the graph is assumed to be both connected and simple. Finally, in Part (iv) any answer is possible, and the answer is not uniquely determined even if we restrict to simple connected graphs.
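To see how much hangs on connectedness, note that Euler’s formula generalises: a plane graph with c connected components satisfies v − e + f = 1 + c. A quick sketch of the arithmetic (ours, not the exam’s notation):

```python
# Euler's formula for a plane graph with c connected components:
#   v - e + f = 1 + c   (c = 1 is the familiar v - e + f = 2).
# With v = 5 vertices and f = 5 faces, the edge count asked for in
# Part (i) depends upon how many components the graph has.

def edge_count(v, f, c):
    """Edges of a plane graph with v vertices, f faces, c components."""
    return v + f - 1 - c

for c in (1, 2, 3):
    print(c, edge_count(5, 5, c))
# Only the connectedness assumption (c = 1) forces the answer e = 8.
```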
It is evident that the entire question is a mess. Most of the question, though not Part (iv), is rescued by assuming that any graph should be connected and simple. There is also no reason, however, why students should feel free or obliged to make that assumption. Moreover, any such reading of 3(a) would implicitly conflict with 3(b), which explicitly refers to a “simple connected graph” three times.
So, how has WA’s Schools Curriculum and Standards Authority subsequently addressed their mess? This is where things get ridiculous, and seriously annoying. The only publicly available document discussing the Exam is the summary report, which is an accomplished exercise in saying nothing. Specifically, this report makes no mention of the many issues with the Exam. More generally, the summary report says little of substance or of interest to anyone, amounting to little more than admin box-ticking.
The first document that addresses Question 3 in detail is the non-public graders’ Marking Key. The Key begins with the declaration that it is “an explicit statement about [sic] what the examining panel expect of candidates when they respond to particular examination items.” [emphasis added].
What, then, are the explicit expectations in the Marking Key for Question 3(a)? In Part (i) Euler’s formula is applied without comment. For Part (ii) a sample graph is drawn, which happens to be simple, connected and semi-Eulerian; no indication is given that other, fundamentally different graphs are also possible. For Part (iii), a Hamiltonian cycle is indicated for the sample graph, with no indication that non-Hamiltonian graphs are also possible. In Part (iv), it is declared that “the” graph is semi-Eulerian, with no indication that the graph may be non-Eulerian (even if simple and connected) or Eulerian.
In summary, the Marking Key makes not a single mention of graphs being simple or connected, nor what can happen if they are not. If the writers of the Key were properly aware of these issues they have given no such indication. The Key merely confirms and compounds the errors in the Exam.
Question 3 is also addressed, absurdly, in the non-public Examination Report. The Report notes that Question 3(a) failed to explicitly state “the” graph was assumed to be connected, but that “candidates made this assumption [but not the assumption of simplicity?]; particularly as they were required to determine a Hamiltonian cycle for the graph in part (iii)”. That’s it.
Well, yes, it’s obviously the students’ responsibility to look ahead at later parts of a question to determine what they should assume in earlier parts. Moreover, if they do so, they may, unlike the examiners, make proper and sufficient assumptions. Moreover, they may observe that no such assumptions are sufficient for the final part of the question.
Of course what almost certainly happened is that the students constructed the simplest graph they could, which in the vast majority of cases would have been simple and connected and Hamiltonian. But we simply cannot tell how many students were puzzled, or for how long, or whether they had to start from scratch after drawing a “wrong” graph.
In any case, the presumed fact that most (but not all) students were unaffected does not alter the other facts: that the examiners bollocksed the question; that they then bollocksed the Marking Key; that they then bollocksed the explanation of both. And, that SCSA’s disingenuous and incompetent ass-covering is conveniently hidden from public view.
The SCSA is not the most dishonest or inept educational authority in Australia, and their Applications Exam is not the worst of 2017. But one has to hand it to them, they’ve given it the old college try.
Following our discussion with Charlie, we sent a short but strong letter to WA’s School Curriculum Standards Authority, criticising one specific question and suggesting our (and some others’) general concerns. Their polite fobbing off indicated that our comments regarding the particular question “will be looked into”. Generally on the exam, they responded: “Feedback from teachers and candidates indicates the examination was well received and that the examination was fair, valid and based on the syllabus.” The reader can make of that what they will.
Determine the errors, ambiguities and sillinesses in the 2017 WA Applications Exam, Part 1 and Part 2. (Here, also, is the Summary Exam Report. Unfortunately, and ridiculously, the full report and the grading scheme are not made public, and so cannot be part of the competition.)
Post any identified issues in the comments below (anonymously, if you wish). You may post more than once, particularly on different questions, but please don’t edit on the run with post updates and comments to your own posts. You may (politely) comment on and seek to clarify others’ comments.
This post will be updated below, as the issues (or lack thereof) with particular questions are sorted out.
Entry is of course free (though you could always donate to Tenderfeet).
First prize, a signed copy of A Dingo Ate My Math Book, goes to the person who makes the most original and most valuable contributions.
Consolation prizes of Burkard’s QED will be awarded as deemed appropriate.
Rushed and self-appended contributions will be marked down!
This is obviously subjective as all Hell, and Marty’s decision will be final.
Charlie, Paul, Burkard, Anthony, Joseph, David and other fellow travellers are ineligible to enter.
Employees of SCSA are eligible to enter, since there’s no indication they have any chance of winning.
All correspondence will be entered into.
Well, that worked well. Congratulations to Number 8, who wins by default. Details are here. We’ll attempt another competition, of hopefully broader interest, in the near future.
There’s good reason to be unhappy with the low percentage of female mathematics students, particularly at advanced levels. So, Oxford’s decision is in response to a genuine issue and is undoubtedly well-intentioned. Their decision, however, also appears to be dumb, and it smells of dishonesty.
There are many suggestions as to why women are underrepresented in mathematics, and there’s plenty of room for thoughtful disagreement. (Of course there is also no shortage of pseudoscientific clowns and feminist nitwits.) Unfortunately, Oxford’s decision appears to be more in the nature of statistical manipulation than meaningful change.
Without more information, and the University has not been particularly forthcoming, it is difficult to know the effects of this decision. Reportedly, the percentage of female first class mathematics degrees awarded by Oxford increased from 21% in 2016 to 39% last year, while male firsts increased marginally to 47%. Oxford is presumably pleased, but without detailed information about score distributions and grade cut-offs it is impossible to understand what is underlying those percentages. Even if otherwise justified, however, Oxford’s decision constitutes deliberate grade inflation, and correspondingly its first class degree has been devalued.
The reported defences of Oxford’s decision tend only to undermine the decision. It seems that when the change was instituted last (Northern) summer, Oxford provided no rationale to the public. It was only last month, after The Times gained access to University documents under FOI, that the true reasons became known publicly. It’s a great way to sell a policy, of course, to be legally hounded into exposing your reasons.
Sarah Hart, a mathematician at the University of London, is quoted by The Times in support of longer exams: “Male students were quicker to answer questions, she said, but were more likely to get the answer wrong”. And, um, so we conclude what, exactly?
John Banzhaf, a prominent public interest lawyer, is reported as doubting Oxford’s decision could be regarded as “sexist”, since the extension of time was identical for male and female candidates. This is hilariously legalistic from such a politically wise fellow (who has some genuine mathematical nous).
The world is full of policies consciously designed to hurt one group or help another, and many of these policies are poorly camouflaged by fatuous “treating all people equally” nonsense. Any such policy can be good or bad, and well-intentioned or otherwise, but such crude attempts at camouflage are never honest or smart. The stated purpose of Oxford’s policy is to disproportionally assist female candidates; there are arguments for Oxford’s decision and one need not agree with the pejorative connotations of the word, but the policy is blatantly sexist.
Finally, there is the fundamental question of whether extending the exams makes them better exams. There is no way that someone unfamiliar with the exams and the students can know for sure, but there are reasons to be sceptical. It is in the nature of most exams that there is time pressure. That’s not perfect, and there are very good arguments for other forms of assessment in mathematics. But all assessment forms are artificial and/or problematic in some significant way. And an exam is an exam. Presumably the maths exams were previously 90 minutes for some reason, and in the public debate no one has provided any proper consideration or critique of any such reasons.
The Times quotes Oxford’s internal document in support of the policy: “It is thought that this [change in exam length] might mitigate the . . . gender gap that has arisen in recent years, and in any case the exam should be a demonstration of mathematical understanding and not a time trial.”
This quote pretty much settles the question. No one has ever followed “and in any case” with a sincere argument.
A rectangle has an area of x² + 5x – 36. What are the lengths of the sides of the rectangle in terms of x?
Obviously, the expectation was for the students to declare the side lengths to be the linear factors x – 4 and x + 9, and just as obviously this is mathematical crap. (Just to hammer the point, set x = 5, giving an area of 14, and think about what the side lengths “must” be.)
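To hammer the point in code as well (a trivial check, with our own choice of alternative sides):

```python
# At x = 5 the area x^2 + 5x - 36 is 14.  The factors x - 4 and x + 9
# evaluate to 1 and 14, but a 2-by-7 rectangle has the same area:
# nothing about the area determines the side lengths.
x = 5
area = x**2 + 5 * x - 36
factor_sides = (x - 4, x + 9)   # the "expected" sides: (1, 14)
other_sides = (2, 7)            # an equally valid pair of sides
print(area, factor_sides, other_sides)
```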
One might hope that, having inflicted this mathematical garbage on a nation of students, the New Zealand Qualifications Authority would have been gently slapped around by a mathematician or two, and that the error would not be repeated. One might hope this, but, in these idiot times, it would be very foolish to expect it.
A few weeks ago, New Zealand maths education was in the news (again). There was lots of whining about “disastrous” exams, with “impossible” questions, culminating in a pompous petition, and ministerial strutting and general hand-wringing. Most of the complaints, however, appear to be pretty trivial; sure, the exams were clunky in certain ways, but nothing that we could find was overly awful, and nothing that warranted the subsequent calls for blood.
What makes this recent whining so funny is the comparison with the deafening silence in September. That’s when the 2017 Level 1 Algebra Exams appeared, containing the exact same rectangle crap as in 2016 (Question 3(a)(i) and Question 2(a)(i)). And, as in 2016, there is no evidence that anyone in New Zealand had the slightest concern.
People like to make fun of all the sheep in New Zealand, but there’s many more sheep there than anyone suspects.
Unfortunately, the technique presented in the three Examiners’ Reports for solving equation (1) is fundamentally wrong. (The Reports are here, here and here.) In synch with this wrongness, the standard textbook considers four misleading examples, and its treatment of the examples is infused with wrongness (Chapter 1F). It’s a safe bet that the forthcoming Report on the 2017 Methods Exam 2 will be plenty wrong.
What is the promoted technique? It is to ignore the difficult equation above, and to solve instead the presumably simpler equation

f(x) = x      (2)

or perhaps the equation

f⁻¹(x) = x      (2)′

Which is wrong.
It is simply not valid to assume that either equation (2) or (2)’ is equivalent to (1). Yes, as long as the inverse of f exists then equation (2)’ is equivalent to equation (2): a solution x to (2)’ will also be a solution to (2), and vice versa. And, yes, then any solution to (2) and (2)’ will also be a solution to (1). The converse, however, is in general false: a solution to (1) need not be a solution to (2) or (2)’.
It is easy to come up with functions illustrating this, or think about the graph above, or look here.
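For a concrete instance (our example, not one from the exams), take f(x) = −x³: then x = 1 solves equation (1), since both f(1) and f⁻¹(1) equal −1, but it certainly doesn’t solve f(x) = x. A quick numerical check:

```python
import math

def f(x):
    return -x**3

def f_inv(x):
    # inverse of f: the real cube root of -x
    return -math.copysign(abs(x) ** (1 / 3), x)

# x = 1 and x = -1 both solve f(x) = f_inv(x), but neither solves
# f(x) = x; only x = 0 solves all three equations.
for x in (-1.0, 0.0, 1.0):
    print(x, f(x), f_inv(x))
```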
OK, the VCAA might argue that the exams (and, except for a couple of up-in-the-attic exercises, the textbook) are always concerned with functions for which solving (2) or (2)’ happens to suffice, so what’s the problem? The problem is that this argument would be idiotic.
Suppose that we taught students that roots of polynomials are always integers, instructed the students to only check for integer solutions, and then carefully arranged for the students to only encounter polynomials with integer solutions. Clearly, that would be mathematical and pedagogical crap. The treatment of equation (1) in Methods exams, and the close to universal treatment in Methods more generally, is identical.
OK, the VCAA might continue to argue that the students have their (stupefying) CAS machines at hand, and that the graphs of the particular functions under consideration make clear that solving (2) or (2)’ suffices. There would then be three responses:
(i) No one tests whether Methods students do anything like a graphical check, or anything whatsoever.
(ii) Hardly any Methods students do do anything. The overwhelming majority of students treat equations (1), (2) and (2)’ as automatically equivalent, and they have been given explicit license by the Examiners’ Reports to do so. Teachers know this and the VCAA knows this, and any claim otherwise is a blatant lie. And, for any reader still in doubt about what Methods students actually do, here’s a thought experiment: imagine the 2018 Methods exam requires students to solve equation (1) for the function f(x) = (x-2)/(x-1), and then imagine the consequences.
(iii) Even if students were implicitly or explicitly arguing from CAS graphics, “Look at the picture” is an absurdly impoverished way to think about or to teach mathematics, or pretty much anything. The power of mathematics is to be able to take the intuition and to either demonstrate what appears to be true, or demonstrate that the intuition is misleading. Wise people are wary of the treachery of images; the VCAA, alas, promotes it.
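On point (ii), the thought experiment has real teeth: f(x) = (x − 2)/(x − 1) is its own inverse, so equation (1) holds at every point of the domain, while f(x) = x, i.e. x² − 2x + 2 = 0, has no real solutions at all. A sketch of the check:

```python
# f(x) = (x - 2)/(x - 1) is self-inverse: f(f(x)) = x for all x != 1.
# Hence f(x) = f_inv(x) holds everywhere on the domain, while
# f(x) = x reduces to x^2 - 2x + 2 = 0, which has discriminant -4.
def f(x):
    return (x - 2) / (x - 1)

for x in (-3.0, 0.0, 0.5, 2.0, 10.0):
    assert abs(f(f(x)) - x) < 1e-12   # self-inverse: equation (1) holds at x
    assert f(x) != x                  # but f(x) = x fails at x

discriminant = (-2) ** 2 - 4 * 1 * 2  # for x^2 - 2x + 2 = 0
print(discriminant)                    # negative: no real roots
```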
The real irony and idiocy of this situation is that, with natural conditions on the function f, equation (1) is equivalent to equations (2) and (2)’, and that it is well within reach of Methods students to prove this. If, for example, f is a strictly increasing function then it can readily be proved that the three equations are equivalent. Working through and applying such results would make for excellent lessons and excellent exam questions.
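For the record, here is the sort of short argument that would be accessible (a sketch, in our notation), for f strictly increasing and invertible:

```latex
% Claim: for f strictly increasing and invertible,
% f(a) = f^{-1}(a) if and only if f(a) = a.
\begin{proof}
Suppose $f(a) = f^{-1}(a)$ but $f(a) > a$. Then $f^{-1}(a) > a$, and
applying the increasing function $f$ to both sides gives $a > f(a)$,
a contradiction. The case $f(a) < a$ fails in the same way, and so
$f(a) = a$. Conversely, if $f(a) = a$ then $f^{-1}(a) = a$ as well,
and so $f(a) = f^{-1}(a)$.
\end{proof}
```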
Instead, what we have is crap. Every year, year after year, thousands of Methods students are being taught and are being tested on mathematical crap.
This is not a great start, since it’s a little peculiar using the logistic equation to model an area proportion, rather than a population or a population density. It’s also worth noting that the strict inequalities on P are unnecessary and rule out of consideration the equilibrium (constant) solutions P = 0 and P = 1.
Clunky framing aside, part (a) of Question 3 is pretty standard, requiring the solving of the above (separable) differential equation with initial condition P(0) = 1/2. So, a decent integration problem trivialised by the presence of the stupefying CAS machine. After which things go seriously off the rails.
The setting for part (b) of the question has a toxin added to the petri dish at time t = 1, with the bacterial growth then modelled by the equation
Well, probably not. The effect of toxins is most simply modelled as depending linearly on P, and there seems to be no argument for the square root. Still, this kind of fantasy modelling is par for the VCAA’s crazy course. Then, however, comes Question 3(b):
Find the limiting value of P, which is the maximum possible proportion of the Petri dish that can now be covered by the bacteria.
The question is a mess. And it’s wrong.
The Examiners’ “Report” (which is not a report at all, but merely a list of short answers) fails to indicate what students did or how well they did on this short, 2-mark question. Presumably the intent was for students to find the limit of P by finding the maximal equilibrium solution of the differential equation. So, setting dP/dt = 0 implies that the right hand side of the differential equation is also 0. The resulting equation is not particularly nice, a quartic equation for Q = √P. Just more silly CAS stuff, then, giving the largest solution P = 0.894 to the requested three decimal places.
In principle, applying that approach here is fine. There are, however, two major problems.
The first problem is with the wording of the question: “maximum possible proportion” simply does not mean maximal equilibrium solution, nor much of anything. The maximum possible proportion covered by the bacteria is P = 1. Alternatively, if we follow the examiners and needlessly exclude P = 1 from consideration, then there is no maximum possible proportion, and P can just be arbitrarily close to 1. Either way, a large initial P will decay down to the maximal equilibrium solution.
One might argue that the examiners had in mind a continuation of part (a), so that the proportion P begins below the equilibrium value and then rises towards it. That wouldn’t rescue the wording, however. The equilibrium solution is still not a maximum, since the equilibrium value is never actually attained. The expression the examiners are missing, and may even have heard of, is least upper bound. That expression is too sophisticated to be used on a school exam, but whose problem is that? It’s the examiners who painted themselves into a corner.
The second issue is that it is not at all obvious – indeed it can easily fail to be true – that the maximal equilibrium solution for P will also be the limiting value of P. The garbled information within question (b) is instructing students to simply assume this. Well, ok, it’s their question. But why go to such lengths to impose a dubious and impossible-to-word assumption, rather than simply asking directly for an equilibrium solution?
To clarify the issues here, and to show why the examiners were pretty much doomed to make a mess of things, consider the following differential equation:
By setting Q = √P, for example, it is easy to show that the equilibrium solutions are P = 0 and P = 1/4. Moreover, by considering the sign of dP/dt for P above and below the equilibrium P = 1/4, it is easy to obtain a qualitative sense of the general solutions to the differential equation:
In particular, it is easy to see that the constant solution P = 1/4 is a semi-stable equilibrium: if P(0) is slightly below 1/4 then P(t) will decay to the stable equilibrium P = 0.
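This qualitative behaviour is easy to confirm numerically. The sketch below uses dP/dt = −√P (√P − 1/2)², a stand-in equation of our own devising (not the one above), chosen to have equilibria at P = 0 and P = 1/4 and exactly the semi-stable behaviour just described:

```python
import math

def dPdt(P):
    # Illustrative (not the exam's) equation with equilibria P = 0, 1/4:
    # below 1/4 solutions decay to 0; above 1/4 they decay down to 1/4,
    # so P = 1/4 is a semi-stable equilibrium.
    return -math.sqrt(P) * (math.sqrt(P) - 0.5) ** 2

def solve(P0, T=2000.0, dt=0.05):
    """Forward Euler, clamped at the P = 0 equilibrium."""
    P = P0
    for _ in range(int(T / dt)):
        P = max(P + dt * dPdt(P), 0.0)
    return P

print(solve(0.30))   # starts above 1/4: decays DOWN towards 0.25
print(solve(0.20))   # starts below 1/4: decays all the way to 0
```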
This type of analysis, which can readily be performed on the toxin equation above, is simple, natural and powerful. And, it seems, non-existent in Specialist Mathematics. The curriculum contains nothing that suggests or promotes any such analysis, nor even a mention of equilibrium solutions. The same holds for the standard textbook, in which, for example, the equation for Newton’s law of cooling is solved (clumsily), but there’s not a word of insight into the solutions.
And this explains why the examiners were doomed to fail. Yes, they almost stumbled into writing a good, mathematically rich exam question. The paper thin curriculum, however, wouldn’t permit it.
Which one of the following statistics can never be negative?
A. the maximum value in a data set
B. the value of a Pearson correlation coefficient
C. the value of a moving mean in a smoothed time series
D. the value of a seasonal index
E. the value of a slope of a least squares line fitted to a scatterplot
Before we get started, a quick word on the question’s repeated use of the redundant “the value of”.
Now, on with answering the question.
It is pretty obvious that the statistics in A, B, C and E can all be negative, so presumably the intended answer is D. However, D is also wrong: a seasonal index can also be negative. Unfortunately the explanation of “seasonal index” in the standard textbook is lost in a jungle of non-explanation, so to illustrate we’ll work through a very simple example.
Suppose a company’s profits and losses over the four quarters of a year are as follows:
So, the total profit over the year is $8,000, and then the average quarterly profit is $2,000. The seasonal index (SI) for each quarter is then that quarter’s profit (or loss) divided by the average quarterly profit:
Clearly this example is general, in the sense that in any scenario where the seasonal data are both positive and negative, some of the seasonal indices will be negative. So, the exam question is not merely technically wrong, with a contrived example raising issues: the question is wrong wrong.
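To make the computation concrete, here is a sketch with hypothetical quarterly figures, chosen to be consistent with the totals quoted above (our reconstruction, not necessarily the original table):

```python
# Hypothetical quarterly figures consistent with the totals above
# (total profit $8,000, average $2,000); not necessarily the original table.
profits = [5000, 4000, -2000, 1000]

total = sum(profits)
average = total / len(profits)
seasonal_indices = [p / average for p in profits]

print(total, average)        # 8000 2000.0
print(seasonal_indices)      # [2.5, 2.0, -1.0, 0.5]

# The loss quarter produces a NEGATIVE seasonal index, and the indices
# still average to 1, as seasonal indices must.
assert sum(seasonal_indices) / len(seasonal_indices) == 1.0
```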
Now, to be fair, this time the VCAA has a defense. It appears to be more common to apply seasonal indices in contexts where all the data are one sign, or to use absolute values to then consider magnitudes of deviations. It also appears that most or all examples Further students would have studied included only positive data.
So, yes, the VCAA (and the Australian Curriculum) don’t bother to clarify the definition or permitted contexts for seasonal indices. And yes, the definition in the standard textbook implicitly permits negative seasonal indices. And yes, by this definition the exam question is plain wrong. But, hopefully most students weren’t paying sufficient attention to realise that the VCAA weren’t paying sufficient attention, and so all is ok.
Well, the defense is something like that. The VCAA can work on the wording.
The first question in the matrix module of Further Mathematics’ Exam 2 is concerned with a school canteen selling pies, rolls and sandwiches over three separate weeks. The number of items sold is set up as a 3 x 3 matrix, one row for each week and one column for each food choice. The last part, (c)(ii), of the question then reads:
The matrix equation below shows that the total value of all rolls and sandwiches sold in these three weeks is $915.60
Matrix L in this equation is of order 1 x 3.
Write down matrix L.
This 1-mark question is presumably meant to be a gimme, with answer L = [0 1 1]. Unfortunately the question is both weird and wrong. (And lacking in punctuation. Guys, it’s not that hard.) The wrongness comes from the examiners having confused their rows and columns. As is made clear in the previous part, (c)(i), of the question, the 3 x 1 matrix of numbers indicates the total earnings from each of the three weeks, not from each of the three food choices. So, the equation indicates the total value of all products sold in weeks 2 and 3.
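The mix-up is easy to see with hypothetical numbers (ours, not the exam’s): selecting items requires masking the column (item) index before the weekly totals are formed, not after.

```python
# Hypothetical figures, not the exam's.
# sales[w][i]: items of kind i (pies, rolls, sandwiches) sold in week w.
sales = [
    [10, 20, 30],
    [15, 25, 35],
    [20, 30, 40],
]
prices = [4.50, 5.00, 6.00]   # price per pie, roll, sandwich

# Weekly earnings: the 3 x 1 column in the exam's equation.
weekly = [sum(s * p for s, p in zip(row, prices)) for row in sales]

# What L = [0 1 1] actually computes against that column:
weeks_2_and_3 = 0 * weekly[0] + 1 * weekly[1] + 1 * weekly[2]

# What the question MEANT: the total value of rolls and sandwiches,
# which needs the mask applied to the item (column) index instead.
mask = [0, 1, 1]              # drop pies, keep rolls and sandwiches
rolls_and_sandwiches = sum(
    sales[w][i] * mask[i] * prices[i] for w in range(3) for i in range(3)
)

print(weeks_2_and_3, rolls_and_sandwiches)   # the two numbers differ
```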
There’s not much to say about such an obvious error. It is very easy to confuse rows and columns, and we’ve all done it on occasion, but if VCAA’s vetting cannot catch this kind of mistake then it cannot be relied upon to catch anything. The only question is how the Examiners’ Report will eventually address the error. The VCAA is well-practised in cowardly silence and weasel-wording, but it would be exceptionally Trumplike to attempt such tactics here.
Error aside, the question is artificial, and it is not clear that the matrix equation “shows” much of anything. Yes, 0-1 and on-or-off matrices are important and useful, but the use of such a matrix in this context is contrived and confusing. Not a hanging offence, and benign by VCAA’s standards, but the question is pretty silly. And, not forgetting, wrong.
Yes, we’ve used that title before, but it’s a damn good title. And there is so much madness in Mathematical Methods to cover. And not only Methods. Victoria’s VCE exams are coming to an end, the maths exams are done, and there is all manner of new and astonishing nonsense to consider. This year, the Victorian Curriculum and Assessment Authority have outdone themselves.
Over the next week we’ll put up a series of posts on significant errors in the 2017 Methods, Specialist Maths and Further Maths exams, including in the mid-year Northern Hemisphere exams. By “significant error” we mean more than just a pointless exercise in button-pushing, or tone-deaf wording, or idiotic pseudomodelling, or aimless pedantry, all of which is endemic in VCE maths exams. A “significant error” in an exam question refers to a fundamental mathematical flaw with the phrasing, or with the intended answer, or with the (presumed or stated) method that students were supposed to use. Not all the errors that we shall discuss are large, but they are all definite errors, they are errors that would have (or at least should have) misled some students, and none of these errors should have occurred. (It is courtesy of diligent (and very annoyed) maths teachers that I learned of most of these questions.) Once we’ve documented the errors, we’ll post on the reasons that the errors are so prevalent, on the pedagogical and administrative climate that permits and encourages them.
Our first post concerns Exam 1 of Mathematical Methods. In the final question, Question 9, students consider the function f(x) = √x (1 − x) on the closed interval [0,1], pictured below. In part (b), students are required to show that, on the open interval (0,1), “the gradient of the tangent to the graph of f” is (1 − 3x)/(2√x). A clumsy combination of calculation and interpretation, but ok. The problem comes when students then have to consider tangents to the graph.
In part (c), students take the angle θ in the picture to be 45 degrees. The pictured tangents then have slopes 1 and -1, and the students are required to find the equations of these two tangents. And therein lies the problem: it turns out that the “derivative” of f is equal to -1 at the endpoint x = 1. However, though the natural domain of the function is [0,∞), the students are explicitly told that the domain of f is [0,1].
This is obvious and unmitigated madness.
Before we hammer the madness, however, let’s clarify the underlying mathematics.
Does the derivative/tangent of a suitably nice function exist at an endpoint? It depends upon who you ask. If the “derivative” is to exist then the standard “first principles” definition must be modified to be a one-sided limit. So, for our function f above, we would define

f′(1) = lim(h→0⁻) [f(1 + h) − f(1)]/h
This is clearly not too difficult to do, and with this definition we find that f'(1) = -1, as implied by the Exam question. (Note that since f naturally extends to the right of x =1, the actual limit computation can be circumvented.) However, and this is the fundamental point, not everyone does this.
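Assuming the exam’s function was f(x) = √x (1 − x), consistent with the endpoint slope of −1 described above, the one-sided limit is easy to check numerically:

```python
import math

def f(x):
    # The exam's function, as we read it: f(x) = sqrt(x) * (1 - x).
    return math.sqrt(x) * (1 - x)

# One-sided difference quotients at the right endpoint x = 1 (h < 0):
for h in (-1e-2, -1e-4, -1e-6):
    print(h, (f(1 + h) - f(1)) / h)
# The quotients approach -1, matching f'(x) = (1 - 3x)/(2*sqrt(x)) at x = 1.
```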
At the university level it is common, though far from universal, to permit differentiability at the endpoints. (The corresponding definition of continuity on a closed interval is essentially universal, at least after first year.) At the school level, however, the waters are much muddier. The VCE curriculum and the most popular and most respected Methods textbook appear to be completely silent on the issue. (This textbook also totally garbles the related issue of derivatives of piecewise defined (“hybrid”) functions.) We suspect that the vast majority of Methods teachers are similarly silent, and that the minority of teachers who do raise the issue would not in general permit differentiability at an endpoint.
In summary, it is perfectly acceptable to permit derivatives/tangents to graphs at their endpoints, and it is perfectly acceptable to proscribe them. It is also perfectly acceptable, at least at the school level, to avoid the issue entirely, as is done in the VCE curriculum, by most teachers and, in particular, in part (b) of the Exam question above.
What is blatantly unacceptable is for the VCAA examiners to spring a completely gratuitous endpoint derivative on students when the issue has never been raised. And what is pure and unadulterated madness is to spring an endpoint derivative after carefully and explicitly avoiding it on the immediately previous part of the question.
The Victorian Curriculum and Assessment Authority has a long tradition of scoring own goals. The question above, however, is spectacular. Here, the VCAA is like a goalkeeper grasping the ball firmly in both hands, taking careful aim, and flinging the ball into his own net.
In Q9(b), students were asked to show that the derivative of x^(1/2) − x^(3/2) is (1 − 3x)/(2√x). As we noted, the question was pointlessly verbose in classic VCAA style, but no big deal; an easy 1-mark question. What could go wrong?
Well, what went wrong is that 2/3 of students scored 0/1 on this very easy question. How? The Examination Report explains:
When answering ‘show that’ questions, students should include all steps to demonstrate exactly what was done, but many students often left steps out. A common pattern was to go straight from the first line of differentiation immediately to the final line, with no indication of obtaining a common denominator.
For fuck’s sake.
The stark incompetence of VCAA is often stunning. And, the nasty, meaningless pedantry of the VCAA is often stunning. But, on a question like this, when you see the two in seamless combination, that’s when you realise that you’re in the presence of true greatness.