The Descent of Man

In 1973, the BBC televised The Ascent of Man, the brilliant series by Jacob Bronowski on the development of science and society. In his final episode, The Long Childhood, Bronowski sums up what he regards as special to being human, and the essence of a healthy scientific society:

If we are anything, we must be a democracy of the intellect. We must not perish by the distance between people and government, people and power, by which Babylon, and Egypt, and Rome failed. And that distance can only be … conflated, can only be closed, if knowledge sits here, and not up there.

That seems a hard lesson. After all, this is a world run by specialists. Isn’t that what we mean by a scientific society? No, it isn’t. A scientific society is one in which specialists can indeed do the things like making the electric light work. But it’s you, it’s I, who have to know how nature works, how electricity is one of her expressions, in the light, and in my brain.

And we are really here on a wonderful threshold of knowledge. The ascent of man is always teetering in the balance. There’s always a sense of uncertainty as to whether, when man lifts his foot for the next step, it’s really going to come down ahead. And what is ahead of us? At last, the bringing together of all that we’ve learnt in physics and in biology, towards an understanding of where we have come, what man is.

Knowledge is not a loose-leaf notebook of facts. Above all, it is a responsibility for the integrity of what we are, above all, of what we are as ethical creatures. You can’t possibly maintain that if you let other people run the world for you, while you yourself continue to live … out of a ragbag of morals that come from past beliefs. That’s really crucial today. You see, it’s pointless to advise people to learn differential equations, “You must do a course in electronics or in computer programming.” Of course not. And yet, fifty years from now, if an understanding of man’s origins, his evolution, his history, his progress, is not the commonplace of the schoolbooks, we shall not exist.

Bronowski spoke those words forty-seven years ago. Three more years.

WitCH 38: A Deep Hole

This one is due to commenter P.N., who raised it on another post, and the glaring issue has been discussed there. Still, for the record it should be WitCHed, and we’ve also decided to expand the WitCHiness slightly (and could have expanded it further).

The following questions appeared on 2019 Specialist Mathematics NHT, Exam 2 (CAS). The questions are followed by sample Mathematica solutions (screenshot corrected, to include final comment) provided by VCAA (presumably in the main for VCE students doing the Mathematica version of Methods). The examination report provides answers, identical to those in the Mathematica solutions, but indicates nothing further.

UPDATE (05/07/20)

The obvious problem here, of course, is that the answer for Part (b), in both the examination report and VCAA’s Mathematica solutions, is flat out wrong: the function f_k will also fail to have a stationary point if k = -2 or k = 0. Nearly as bad, and plenty bad, the method in VCAA’s Mathematica solutions to Part (c) is fundamentally incomplete: for a (twice-differentiable) function f to have an inflection point at some a, it is necessary but not sufficient to have f''(a) = 0. (For example, f(x) = x^4 has f''(0) = 0 but no inflection point at x = 0.)

That’s all pretty awful, but we believe there is worse here. The question is, how did the VCAA get it wrong? Errors can always occur, but why specifically did the error in Part (b) occur, and why, for a year and counting, wasn’t it caught? Why was a half-method suggested for Part (c), and why was this half-method presumably considered a reasonable strategy for the exam? Partly, the explanation comes down to this being a question from NHT, about which, as far as we can tell, no one really gives a stuff. This VCAA screw-up, however, points to a deeper, systemic and much more important issue.

The first thing to note is that Mathematica got it wrong: the Solve function did not return the solution to the equation f_k'(x) = 0. What does that imply for using Mathematica and other CAS software? It implies the user should be aware that the machine is not necessarily doing what the user might reasonably think it is doing. Which is a very, very stupid property of a black box: if Solve doesn’t mean “solve”, then what the hell does it mean? Now, as it happens, Mathematica’s/VCAA’s screw-up could have been avoided by using the function Reduce instead of Solve.* That would have saved VCAA’s solutions from being wrong, but not from being garbage.

Ask yourself, what is missing from VCAA’s solutions? Yes, yes, correct answers, but what else? This is it: there are no functions. There are no equations. There is nothing, nothing at all but an unreliable black box. Here we have a question about the derivatives of a function, but nowhere are those derivatives computed, displayed or contemplated in even the smallest sense.

For the NHT problem above, the massive elephant not in the room is an expression for the derivative function:

    \[\color{red} \boldsymbol{f'_k(x) = -\frac{x^2 + 2(k+1)x +1}{(x^2-1)^2}}\]

What do you see? Yep, if your algebraic sense hasn’t been totally destroyed by CAS, you see immediately that the values k = 0 and k = -2 are special, and that special behaviour is likely to occur. You’re aware of the function, alert to its properties, and you’re led back to the simplification of f_k for these special values. Then, either way or both, you are much, much less likely to screw up in the way the VCAA did.
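We don’t have VCAA’s actual Mathematica code to show, but the check is easily machine-verified. Here is a minimal SymPy sketch, using the derivative displayed above (the variable names are ours, and SymPy stands in for Mathematica purely for illustration): for k = 0 and k = -2 the numerator collapses to (x+1)^2 or (x-1)^2, whose sole root is killed by the denominator, and so f_k has no stationary point.

    import sympy as sp

    x, k = sp.symbols('x k', real=True)

    # The derivative displayed above (our transcription)
    fk_prime = -(x**2 + 2*(k + 1)*x + 1) / (x**2 - 1)**2

    # Generic solve: two formal roots, x = -(k+1) ± sqrt(k^2 + 2k)
    print(sp.solve(sp.Eq(fk_prime, 0), x))

    # The special cases: the numerator becomes (x+1)^2 or (x-1)^2, whose sole
    # root makes the denominator vanish, so SymPy (correctly) returns no
    # solutions -- there is no stationary point for k = 0 or k = -2.
    for kval in (0, -2):
        print(kval, sp.solve(sp.Eq(fk_prime.subs(k, kval), 0), x))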

And that always happens. A mathematician always gets a sense of solutions not just from the solution values, but also from the structure of the equations being solved. And all of this is invisible, is impossible, all of it is obliterated by VCAA’s nuclear weapon approach.

And that is insane. To expect, to effectively demand that students “solve” equations without ever seeing those equations, without an iota of concern for what the equations look like, what the equations might tell us, is mathematical and pedagogical insanity.

 

*) Thanks to our ex-student and friend and colleague Sai for explaining some of Mathematica’s subtleties. Readers will be learning more about Sai in the very near future.

MitPY 7: Diophantine Teen Fans

This MitPY is a request from frequent commenter, Red Five:

I’d like to ask what others think of teaching (mostly linear) Diophantine equations in early secondary school. They are nowhere in the curriculum but seem to be everywhere in competitions, including the AMC junior papers on occasion. I don’t see any reason to not teach them (even as an extension idea) but others may have some insights into why it won’t work.
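For readers who haven’t met them: a linear Diophantine equation asks for integer solutions of ax + by = c, and the standard tool is the extended Euclidean algorithm. A minimal Python sketch, purely for illustration (the function names are ours):

    def extended_gcd(a, b):
        # Return (g, x, y) with a*x + b*y = g = gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def solve_linear_diophantine(a, b, c):
        # One integer solution of a*x + b*y = c, or None if none exists.
        g, x, y = extended_gcd(a, b)
        if c % g != 0:
            return None  # solvable only when gcd(a, b) divides c
        scale = c // g
        # All solutions: (x*scale + t*(b//g), y*scale - t*(a//g)) for integer t
        return x * scale, y * scale

    print(solve_linear_diophantine(7, 5, 1))  # (-2, 3), since 7*(-2) + 5*3 = 1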

Nuclear Fishin’

H. R. Currie and G. M. Currie, Open Science Journal

This one was brought to our attention by the Evil Mathologre. It is a tricky one, since it involves the work of a school student, and the student is in no way a target for our criticism. Out of such concerns, we haven’t made this post a WitCH; it should be considered in the same vein as this Maths Masters column.

As reported in Wagga’s Daily Advertiser a couple of weeks ago, and as picked up by The Canberra Times, IB student Hugo Currie was given a “mathematics assignment” (presumably an Internal Assessment) on the golden ratio:

“… we had to investigate an element of the golden ratio in the built or natural environment so I decided to look at atomic structure …”.

Hugo considered the atomic mass number A (protons plus neutrons) of nuclides (isotopes), comparing A to the number N of neutrons and the number Z of protons. Of course, A = N + Z. Hugo then looked for “fibonacci nuclides”, nuclides for which the ratios A/N and N/Z are very good approximations to the golden ratio. He found a bunch, and suggested his results as a guide to hunting for new elements and nuclides. Hugo’s graphic above is a good illustrative summary of his investigation; the horizontal axis is N, the vertical axis is Z, and the black line indicates known stable nuclides.
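The computation being described is tiny. Here is a minimal Python sketch of the ratio check (the tolerance, and the nuclides used as inputs, are our choices, purely for illustration, and are not claims about Hugo’s list):

    from math import sqrt

    PHI = (1 + sqrt(5)) / 2  # the golden ratio

    def is_fibonacci_nuclide(Z, N, tol=0.01):
        # Are both A/N and N/Z within tol of the golden ratio?
        A = N + Z  # mass number: protons plus neutrons
        return abs(A / N - PHI) < tol and abs(N / Z - PHI) < tol

    # Illustrative inputs only (Z = protons, N = neutrons)
    for name, Z, N in [("Pb-208", 82, 126), ("U-238", 92, 146)]:
        print(name, (N + Z) / N, N / Z, is_fibonacci_nuclide(Z, N))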

OK, no big deal. From our perspective, having a class sent off to hunt for the golden ratio is asking for trouble, but it’s just an IA, and Hugo’s work seems interestingly exploratory-ish, in the manner the IB foolishly demands. But why did Hugo make the news, and what’s the problem?

In May, Hugo published a paper, co-authored with his father Professor Geoffrey Currie, in the peer-reviewed Open Science Journal. And, yes, of course that made the news. And yes, that’s the problem.

Unsurprisingly but unfortunately, we can see little if anything research-worthy in the Curries’ paper, and we noticed a number of “Uh-oh”s. A fine IA, sure, but not a research paper, and not news.

We’ll leave it at that. Readers are free to hunt for the uh-ohs.

MAV’s Sense and Censor Ability

We’ve written about MAV’s censorship previously. It seems, unfortunately, that we may have another such incident to write about in the near future. We’ll see.

There is also a third incident that we’ve long planned to write about, but have never gotten around to. It is rather involved, and we won’t give the full story here, but one specific aspect is perhaps worth telling now.

In 2016, we accepted an invitation from the MAV to give a keynote address at their Annual Conference. We chose as our keynote title Same Sermon, New Jokes. We also submitted a “bio pic” – the graphic above – and an abstract. The abstract indicated our contempt for twenty or so organisations and facets of Australian mathematics education.

A couple of months later, the Conference organisers emailed to indicate their objection to our abstract. One can argue the merits and the propriety of this objection, and we will write generally on this at a later date, but one aspect of the objection was particularly notable. The email included the following:

“While we welcome all points of view, we do need to be respectful of the organisations we work with, and with whom we need to maintain good relations … We would like you to re visit the text … without the criticism of formal organisations.”

We pushed back against the criticism, and ended our reply with what we intended as a rhetorical question:

“You wrote that you (plural) welcome all points of view, which I was very reassured to read. Given that, which formal organisations do you consider to be above criticism?”

The email reply from the organisers included a response:

“In regards to the formal organisations with which the MAV has relations, you have stated some of them, e.g. ACARA, VCAA.”

No one at the MAV, including the then President, indicated to us any problem with this request or its clarification.

For now, we’ll leave it there.

WitCH 37: A Foolproof Argument

We’re amazed we didn’t know about this one, which was brought to our attention by commenter P.N. It comes from the 2013 Specialist Mathematics Exam 2. The sole comment on this question in the Examination Report is:

“All students were awarded [the] mark for this question.”

Yep, the question is plain stuffed. We think, however, there is more here than the simple wrongness, which is why we’ve made it a WitCH rather than a PoSWW. Happy hunting.

UPDATE (11/05) Steve C’s comment below has inspired an addition:

UPDATE (20/05/20)

The third greatest issue with the exam question is that it is wrong: none of the available answers is correct. The second greatest issue is that the wrongness is obvious: if z^3 lies in a sector then the natural guess is that z will lie in one of three equally spaced sectors of a third the width, so God knows why the alarm bells weren’t ringing. The greatest issue is that VCAA didn’t have the guts or the basic integrity to fess up: not a single word of responsibility or remorse. Assholes.
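To spell out that natural guess, with generic sector endpoints α and β rather than the exam’s actual numbers: Arg(z^3) and 3Arg(z) agree up to a multiple of 2π, and so

    \[\alpha < \operatorname{Arg}(z^3) < \beta \quad\Longrightarrow\quad \frac{\alpha+2k\pi}{3} < \operatorname{Arg}(z) < \frac{\beta+2k\pi}{3}\,, \quad\text{for some } k \in \{-1,0,1\},\]

giving three equally spaced sectors, 2π/3 apart, each a third the width of the original.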

Those are the elephants stomping through the room but, as commenters have noted, there is plenty more awfulness in this question:

  • “Letting” z = a + bi is sloppy, confusing and pointless;
  • The term “quadrant” is undefined;
  • The use of “principal” is unnecessary;
  • “argument” is better thought of as the measure of an angle, not the angle itself;
  • Given z is a single complex number, “the complete set of values for Arg(z)” will consist of a single number;
  • The grammar isn’t.

SACs of Shit

SACs may not be the greatest problem with VCE mathematics, but they’re right up there. SACs are torture for teachers and torture for students. They teach nothing. As assessment, they are unnecessary, unreliable and phenomenally inefficient. They are a licence for VCAA’s unanswerable auditors, applying Kafkaesque rules, to act either as favour-givers or as little Hitlers, as the mood takes them. These problems are currently amplified to eleven by VCAA’s “We’ll give you some kind of guidance in, oh, a little while” plan for the plague year.

For all of the awfulness of the above, that’s not the worst of it. The worst is that the majority of SACs are monumentally stupid. Literally. A SAC has the imposing presence of a monument, its towering stupidity casting a shadow over everything.

How are SACs so bad? Many contain errors, often subtle although too often not, but, as irritating as that is, that is not the main problem. The main problem is that they are mathematical nonsense. Typically they will present the student with a ridiculous model of a contrived problem, which is then all redone in greater, brain-bludgeoning generality by throwing in a needless parameter in a randomly chosen location. All of this is undertaken, of course, in the nihilistic world of CAS. Finally, somewhere near the end, the poor beleaguered student, who by this stage just wants to escape with their life, will be required to “comment on the model”, to which the usual response is “It’s really nice, please let me go” and to which the only reasonable response is “It’s fucking insane”.

How do we know SACs are this bad? Because we see them. We see the commercial SACs, and the sample SACs, and the past SACs, and the current SACs. Are they all as bad as we suggest? No, of course not. Specialist SACs are typically nowhere near as bad as Methods SACs, and even many Methods SACs will fall short of truly idiotic torture, rising only to the level of being dumb and painful. Then there are the rare few SACs we see that are good, resulting in an exchange:

“This actually makes sense. Who’s your teacher?”

“Oh, it’s Mr. ….”

“Ah. Yes.”

So, yes, the quality and worth of SACs varies widely, but the average is squarely in the neighbourhood of monumental, torturous stupidity. Which brings us to the “why”. Why are SACs in general so awful? There are two reasons.

The first and fundamental reason is the VCAA and their view of what they imagine is a curriculum. VCE mathematics subjects are so shallow and so lacking in a foundation of solid reason that almost any attempt at depth and substance in a SAC is destined to be a farce. The VCAA has replaced foundation and depth with CAS, which reaches peak awfulness in SACs. The VCAA promotes the fantasy that CAS magically transforms students into mathematical explorers, clever little Lewises and Clarks skilfully navigating the conceptual wilderness. The reality, of course, is much less Lewis and Clark than it is Burke and Wills. To top it off, SACs must follow guidelines that Terry Gilliam would be proud of, giving us Burke and Wills’ Bogus Brazilian Journey. Or, just Eraserhead. Something like that.

The second reason is the teachers. Sort of. Even if the subjects were coherent, even if they were unpoisoned by CAS and were unconstrained by vague and ridiculous conditions, even then writing a good SAC would be a very difficult and massively time-consuming task. Most teachers just don’t have the mathematical background, or the literary skill, to write a coherent, correct and mathematically rich SAC; many cannot even recognise one. And, that’s writing a good SAC for this imaginary good subject; writing a good SAC for these fundamentally flawed subjects with their ridiculous constraints is close to impossible, even for a strong teacher. And which teachers, particularly weaker teachers, have the time to compose such a good SAC? Why bother trying? And so, with the greatest common sense, most teachers do not. Most teachers stick to the audit-proof and meaningless formulaic SAC bullshit that the VCAA expects and effectively demands.

The VCAA’s SAC system is a crime against mathematical humanity.

UPDATE (15/5)

We received the following from a student acquaintance (who hadn’t read this post):

Hi Marty, given the upcoming math SACs approaching soon, the pressure is on to practice and practice. Attached below is last year’s Methods SAC1 (Unit 3/4) for [the student’s school]. I remember many talented friends of mine who were stumped, and didn’t do very well on this SAC. Personally, I thought this SAC was horrifying. In contrast to Specialist, (I actually quite enjoy Specialist!), Methods seems to be a huge prick because of frustrating, ambiguous SACs containing questions seemingly cooked from the pits of hell itself. Are these sort of SACs common across the state?

The student is, of course, correct. The SAC, which comes from a highly respected school, is a nightmare in all of the ways canvassed above. From start to end it is idiotic CAS-driven pseudo-modelling, complete with Magritte nonsense and a pointlessly prissy grading scheme. And, yes, the SAC contains an error.

Of course we won’t reveal the school, much less any teachers involved, which means that we are also unable to critique the SAC in detail. But that is one of the insidious aspects of the SAC system; an entirely proper concern for privacy means that SAC nonsense, although endemic, fails to be exposed to the public critique that is so very much needed.