Eddie Versus the Forces of Woo

No one appears to have a bad word for Eddie Woo. And no, we’re not looking to thump Eddie here; the mathematics videos on Eddie’s WooTube channel are engaging and clear and correct, and his being honoured as Local Australian of the Year and as a Top Ten Teacher is really cool. We do, however, want to comment on Eddie’s celebrity status and what it means.

What do Eddie’s videos exhibit? Simply, Eddie is shown teaching. He is explaining mathematics on a plain old whiteboard, with no gizmos, no techno demos, no classroom flipping, rarely a calculator, none of the familiar crap. There’s nothing at all, except a class of engaged students learning from a knowledgeable and engaging teacher.

Eddie’s classroom is not the slightest bit revolutionary. Indeed, it’s best described as reactionary. Eddie is simply doing what good maths teachers do, and what the majority of maths teachers used to do before they were avalanched with woo, with garbage theories and technological snake oil.

Sure, Eddie tapes his lessons, but Eddie’s charmingly clunky videos are not in any way “changing the face of mathematics teaching”. Eddie’s videos are not examples of teaching, they are evidence of teaching. For actual instruction there are many better videos out there. More importantly, no video will ever compare to having a real-live Eddie to teach you.

There are many real-live Eddies out there, many teachers who know their maths and who are teaching it. And, there would be many, many more real-live Eddies if trainee teachers spent more time learning mathematics properly and much less time in the clutches of Australia’s maths ed professors. That’s the real message of Eddie’s videos.

Chicken Shit

The ACCC has released guidance on the meaning of “free range eggs”, to come into force in April. There are a number of conditions for hens to be designated free range, but the clear mathematical requirement is that the chickens be subject to “a stocking density of 10,000 hens or less [sic] per hectare.” This compares to the maximum of 1500 hens per hectare recommended by the CSIRO. And by Choice. And by the Humane Society International. And by the RSPCA. And by pretty much everyone except Coles and other industry thugs.

The ACCC is just the messenger here, their guidance mirroring the Australian Consumer Law (Free Range Egg Labelling) Information Standard 2017, passed last April. The legislation was introduced by the Minister for Small Business, Michael McCormack. It was McCormack who took credit for the definition of stocking density:

 … my decision takes into consideration the views of consumers, advocacy groups and industry, and provides a sensible balance with a focus on informing consumers – so they can make the choice that’s right for their needs.

The reader can assess whether McCormack’s “consideration” has resulted in anything remotely resembling “sensible balance”, or in the ability of consumers to make an informed choice. Or, rather, whether Minister McCormack is simply another National Party asshole.

Downwardly Mobile

In response to France’s move to ban mobile phones from schools, other countries are now considering doing the same.

Well, sort of. France has banned mobile phones from classrooms since 2010; what is controversial is the French proposal to ban mobiles from schools entirely. So, countries like England and Australia are only actively considering what France has accepted without question for years.

Of course, as soon as the blindingly obvious comes under consideration, there is the backlash from the professionals. The ABC quotes NSW Secondary Principals’ Council president Chris Presland as saying:

We talk about trying to stimulate STEM education in our schools … it seems quite bizarre that we’re talking about banning the most obvious forms of technology at our disposal. 

Dr Joanne Orlando, an “expert on children and technology” at UWS, is also against any such ban. Responding to government comments, Dr Orlando declares that

 it takes us a few years back from all the work we are doing in education and training … There are so many new ways that mobile devices can add to the classroom.

Thank God for experts.

The Oxford is Slow

Last year, Oxford University extended the length of its mathematics exams from 90 to 105 minutes. Why? So that female students would perform better, relative to male students. According to the University, the problem with shorter exams is that “female candidates might be more likely to be adversely affected by time pressure”.

Hmm.

There’s good reason to be unhappy with the low percentage of female mathematics students, particularly at advanced levels. So, Oxford’s decision is in response to a genuine issue and is undoubtedly well-intentioned. Their decision, however, also appears to be dumb, and it smells of dishonesty.

There are many suggestions as to why women are underrepresented in mathematics, and there’s plenty of room for thoughtful disagreement. (Of course there is also no shortage of pseudoscientific clowns and feminist nitwits.) Unfortunately, Oxford’s decision appears to be more in the nature of statistical manipulation than meaningful change.

Without more information, and the University has not been particularly forthcoming, it is difficult to know the effects of this decision. Reportedly, the percentage of female first class mathematics degrees awarded by Oxford increased from 21% in 2016 to 39% last year, while male firsts increased marginally to 47%. Oxford is presumably pleased, but without detailed information about score distributions and grade cut-offs it is impossible to understand what is underlying those percentages. Even if otherwise justified, however, Oxford’s decision constitutes deliberate grade inflation, and correspondingly its first class degree has been devalued.

The reported defences of Oxford’s decision tend only to undermine the decision. It seems that when the change was instituted last (Northern) summer, Oxford provided no rationale to the public. It was only last month, after The Times gained access to University documents under FOI, that the true reasons became known publicly. It’s a great way to sell a policy, of course, to be legally hounded into exposing your reasons.

Sarah Hart, a mathematician at the University of London, is quoted by The Times in support of longer exams: “Male students were quicker to answer questions, she said, but were more likely to get the answer wrong”. And, um, so we conclude what, exactly?

John Banzhaf, a prominent public interest lawyer, is reported as doubting Oxford’s decision could be regarded as “sexist”, since the extension of time was identical for male and female candidates. This is hilariously legalistic from such a politically wise fellow (who has some genuine mathematical nous).

The world is full of policies consciously designed to hurt one group or help another, and many of these policies are poorly camouflaged by fatuous “treating all people equally” nonsense. Any such policy can be good or bad, and well-intentioned or otherwise, but such crude attempts at camouflage are never honest or smart. The stated purpose of Oxford’s policy is to disproportionately assist female candidates; there are arguments for Oxford’s decision, and one need not accept the pejorative connotations of the word, but the policy is blatantly sexist.

Finally, there is the fundamental question of whether extending the exams makes them better exams. There is no way that someone unfamiliar with the exams and the students can know for sure, but there are reasons to be sceptical. It is in the nature of most exams that there is time pressure. That’s not perfect, and there are very good arguments for other forms of assessment in mathematics. But all assessment forms are artificial and/or problematic in some significant way. And an exam is an exam. Presumably the maths exams were previously 90 minutes for some reason, and in the public debate no one has provided any proper consideration or critique of any such reasons.

The Times quotes Oxford’s internal document in support of the policy: “It is thought that this [change in exam length] might mitigate the . . . gender gap that has arisen in recent years, and in any case the exam should be a demonstration of mathematical understanding and not a time trial.” 

This quote pretty much settles the question. No one has ever followed “and in any case” with a sincere argument.

Smoke Gets in Your IQs

There’s not much more revolting than the tobacco industry. Well, OK, there’s racist scum like Trump and Turnbull. And there’s greasy media apologists. And Bill Gates. And Mia Farrow.*

Alright, the world is full of awful people. But you get the point: it is difficult to be on the side of smoking and tobacco-pushing sociopaths.

Difficult, but not impossible.

Recently, the media was full of shock and horror at a new study on smoking. It was widely reported that 2/3 of people who try one cigarette end up as “daily smokers”. This was the conclusion of a meta-analysis, covering over 200,000 respondents from eight surveys. Professor Peter Hajek, one of the study’s authors, noted the meta-analysis constituted documentation of the “remarkable hold that cigarettes can establish after a single experience.”

Which is crap, and obvious crap. The implied suggestion that a single cigarette can turn a person into a helpless addict is nothing but Reefer Madness madness.

How can a respected and sophisticated academic study come to such a conclusion? Well, it doesn’t.

Anyone who has read the great debunking by Susan Traynor‘s son knows to never take a statistical study, much less a one sentence summary of a study, at face value. In this case, and as the authors of the study properly and cautiously note, that “2/3 of people” hides a wide variance in survey quality, response rates and response types.

More fundamentally, and astonishingly, the study (paywalled) never attempts to clarify, much less define, the term “daily smoker”. How many days does that require? The appendix to the study suggests that only three of the eight surveys included in the meta-analysis asked about “daily” smoking with specific reference to a minimal time period, the periods being 30 days, “nearly every day” for two months, and six months.

Of these three studies, the 2013 US NSDUH survey, which used the 30-day period, had around 55,000 respondents and the highest response rate, of around 72%. Amongst those respondents, about 50% of those who had ever smoked had at some time been “daily smokers” (i.e. for 30 days). Hardly insignificant, nor an insignificant time period, but a significant step down from “2/3 daily smokers”. (For some reason, the figures quoted in the meta-analysis, though close, are not identical to the figures in the NSDUH survey; specifically the number of people answering “YES” to the questions “CIGEVER” and “CIGDLYMO” differ.)

Even accepting the meta-analysis as sufficiently accurate, so what? What does it actually indicate? Reasonably enough, the authors suggest that their study has implications for efforts to stop people becoming regular smokers. The authors are tentative, however, rightly leaving the policy analysis for another forum. In particular, in the study the authors never make any claim of the “remarkable hold” that a single cigarette can have, nor do they make any remotely similar claim.

The “remarkable hold” line, which was repeated verbatim in almost every news report, originates from a media release from Hajek’s university. Of course barely any media organisations bothered to look beyond the media release, or to think for half a second before copying and pasting.

There is indeed a remarkable hold here. It is the remarkable hold university media units have on news organisations, which don’t have the time or experience or basic nous to be properly skeptical of the over-egged omelettes routinely handed to them on a platter.

Update: Just a quick addition, for those who might doubt that Turnbull is racist scum.

* Yeah, yeah, no one knows, except Mia and Woody. But I believe Moses.

Wenn Will We Ever Learn?

Another day, another person banging the STEM drum.

Today we’re supposedly learning about The case for making maths mandatory in high school.

Except we’re learning nothing of the sort.

What we are learning is that the author is incapable of composing a paragraph containing more than one sentence.

This is very annoying.

We are also learning nothing about mathematics education.

This is also very annoying.

We are learning, however, that a sequence of tendentious and unsupported and unconnected dot points makes for a boring and pointless newspaper column.

We are learning all this from Kim Wenn, the retiring CIO of Tabcorp.

This is perfect.

Polynomialy Perverse

What, with its stupid curricula, stupid texts and really monumentally stupid exams, it’s difficult to imagine a wealthy Western country with worse mathematics education than Australia. Which is why God gave us New Zealand.

Earlier this year we wrote about the first question on New Zealand’s 2016 Level 1 algebra exam:

A rectangle has an area of  \bf x^2+5x-36. What are the lengths of the sides of the rectangle in terms of  \bf x.

Obviously, the expectation was for the students to declare the side lengths to be the linear factors x – 4 and x + 9, and just as obviously this is mathematical crap. (Just to hammer the point, set x = 5, giving an area of 14, and think about what the side lengths “must” be.)
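
Just to hammer the hammering, here is the x = 5 arithmetic spelled out (our working, of course, not NZQA’s). The intended factorisation is x^2+5x-36 = (x-4)(x+9), and then

    \[\boldsymbol{x = 5 \ \ \Longrightarrow \ \ x^2+5x-36 \ = \ 14\,, \qquad x - 4 \ = \ 1\,, \qquad x + 9 \ = \ 14\,.}\]

So, supposedly, a rectangle of area 14 “must” have sides 1 and 14, even though a 2 × 7 rectangle, or a √14 × √14 square, has exactly the same area; the area of a rectangle simply does not determine its side lengths.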

One might hope that, having inflicted this mathematical garbage on a nation of students, the New Zealand Qualifications Authority would have been gently slapped around by a mathematician or two, and that the error would not be repeated. One might hope this, but, in these idiot times, it would be very foolish to expect it.

A few weeks ago, New Zealand maths education was in the news (again). There was lots of whining about “disastrous” exams, with “impossible” questions, culminating in a pompous petition, and ministerial strutting and general hand-wringing. Most of the complaints, however, appear to be pretty trivial; sure, the exams were clunky in certain ways, but nothing that we could find was overly awful, and nothing that warranted the subsequent calls for blood.

What makes this recent whining so funny is the comparison with the deafening silence in September. That’s when the 2017 Level 1 Algebra Exams appeared, containing the exact same rectangle crap as in 2016 (Question 3(a)(i) and Question 2(a)(i)). And, as in 2016, there is no evidence that anyone in New Zealand had the slightest concern.

People like to make fun of all the sheep in New Zealand, but there are many more sheep there than anyone suspects.

UPDATE (04/02/19): An Oxford school text joins in the fun.

Fixations and Madness

Our sixth and final post on the 2017 VCE exam madness is on some recurring nonsense in Mathematical Methods. The post will be relatively brief, since a proper critique of every instance of the nonsense would be painfully long, and since we’ve said it all before.

The mathematical problem concerns, for a given function f, finding the solutions to the equation

    \[\boldsymbol{(1)\qquad\qquad f(x) \ = \ f^{-1}(x)\,.}\]

This problem appeared, in various contexts, on last month’s Exam 2 in 2017 (Section B, Questions 4(c) and 4(i)), on the Northern Hemisphere Exam 1 in 2017 (Questions 8(b) and 8(c)), on Exam 2 in 2011 (Section 2, Question 3(c)(ii)), and on Exam 2 in 2010 (Section 2, Question 1(a)(iii)).

Unfortunately, the technique presented in the three Examiners’ Reports for solving equation (1) is fundamentally wrong. (The Reports are here, here and here.) In synch with this wrongness, the standard textbook considers four misleading examples, and its treatment of the examples is infused with wrongness (Chapter 1F). It’s a safe bet that the forthcoming Report on the 2017 Methods Exam 2 will be plenty wrong.

What is the promoted technique? It is to ignore the difficult equation above, and to solve instead the presumably simpler equation

    \[ \boldsymbol{(2) \qquad\qquad  f(x) \ = \  x\,,}\]

or perhaps the equation

    \[\boldsymbol{(2)' \qquad\qquad f^{-1}(x)\ = \ x \,.}\]

Which is wrong.

It is simply not valid to assume that either equation (2) or (2)’ is equivalent to (1). Yes, as long as the inverse of f exists then equation (2)’ is equivalent to equation (2): a solution x to (2)’ will also be a solution to (2), and vice versa. And, yes, then any solution to (2) and (2)’ will also be a solution to (1). The converse, however, is in general false: a solution to (1) need not be a solution to (2) or (2)’.

It is easy to come up with functions illustrating this, or think about the graph above, or look here.
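
For one concrete illustration (our example, not one from any exam), take

    \[\boldsymbol{f(x) \ = \ -x^3\,, \qquad f^{-1}(x) \ = \ -x^{1/3}\,.}\]

Here equation (2) becomes -x^3 = x, with the single solution x = 0, and the same goes for (2)’. Equation (1), however, reduces to x^3 = x^{1/3}, which has the three solutions x = -1, 0 and 1: plugging in x = 1, for example, both f and its inverse output -1. Solving (2) instead of (1) simply loses the two solutions x = ±1.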

OK, the VCAA might argue that the exams (and, except for a couple of up-in-the-attic exercises, the textbook) are always concerned with functions for which solving (2) or (2)’ happens to suffice, so what’s the problem? The problem is that this argument would be idiotic.

Suppose that we taught students that roots of polynomials are always integers, instructed the students to only check for integer solutions, and then carefully arranged for the students to only encounter polynomials with integer solutions. Clearly, that would be mathematical and pedagogical crap. The treatment of equation (1) in Methods exams, and the close to universal treatment in Methods more generally, is identical.

OK, the VCAA might continue to argue that the students have their (stupefying) CAS machines at hand, and that the graphs of the particular functions under consideration make clear that solving (2) or (2)’ suffices. There would then be three responses:

(i) No one tests whether Methods students do anything like a graphical check, or anything whatsoever.

(ii) Hardly any Methods students do do anything. The overwhelming majority of students treat equations (1), (2) and (2)’ as automatically equivalent, and they have been given explicit license by the Examiners’ Reports to do so. Teachers know this and the VCAA knows this, and any claim otherwise is a blatant lie. And, for any reader still in doubt about what Methods students actually do, here’s a thought experiment: imagine the 2018 Methods exam requires students to solve equation (1) for the function f(x) = (x-2)/(x-1), and then imagine the consequences. (The short computation after this list spells out those consequences.)

(iii) Even if students were implicitly or explicitly arguing from CAS graphics, “Look at the picture” is an absurdly impoverished way to think about or to teach mathematics, or pretty much anything. The power of mathematics is to be able to take the intuition and to either demonstrate what appears to be true, or demonstrate that the intuition is misleading. Wise people are wary of the treachery of images; the VCAA, alas, promotes it.
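
As for the thought experiment in (ii), here is the promised computation (ours, and most definitely not the VCAA’s). For f(x) = (x-2)/(x-1) we have

    \[\boldsymbol{f(f(x)) \ = \ \frac{\ \frac{x-2}{x-1} - 2\ }{\ \frac{x-2}{x-1} - 1\ } \ = \ \frac{(x-2) - 2(x-1)}{(x-2) - (x-1)} \ = \ \frac{-x}{-1} \ = \ x\,.}\]

So f is its own inverse, and equation (1) holds for every x in the domain, that is for every x ≠ 1. The promoted approach, however, is to solve equation (2), which here amounts to x^2 - 2x + 2 = 0, and which has no real solutions. A student trained to treat (1) and (2) as interchangeable would thus confidently report that (1) has no solutions, when in truth it has infinitely many. Those are the consequences.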

The real irony and idiocy of this situation is that, with natural conditions on the function f, equation (1) is equivalent to equations (2) and (2)’, and that it is well within reach of Methods students to prove this. If, for example, f is a strictly increasing function then it can readily be proved that the three equations are equivalent. Working through and applying such results would make for excellent lessons and excellent exam questions.
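
As a sketch of how such a lesson might go (our argument, though it is completely standard): suppose f is strictly increasing, so that the inverse of f exists and is also strictly increasing, and suppose x solves equation (1). If f(x) > x then applying the inverse to both sides gives

    \[\boldsymbol{x \ = \ f^{-1}(f(x)) \ > \ f^{-1}(x) \ = \ f(x)\,,}\]

contradicting f(x) > x; the case f(x) < x fails in exactly the same way. So f(x) = x, meaning any solution to (1) is a solution to (2), and since the reverse implication always holds, the three equations are equivalent.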

Instead, what we have is crap. Every year, year after year, thousands of Methods students are being taught and are being tested on mathematical crap.

The Madness of Crowd Models

Our fifth and penultimate post on the 2017 VCE exam madness concerns Question 3 of Section B on the Northern Hemisphere Specialist Mathematics Exam 2. The question begins with the logistic equation for the proportion P of a petri dish covered by bacteria:

    \[\boldsymbol{\frac{{\rm d} P}{{\rm d} t\ }= \frac{P}{2}\left(1 - P\right)\,\qquad 0 < P < 1\,.}\]

This is not a great start, since it’s a little peculiar using the logistic equation to model an area proportion, rather than a population or a population density. It’s also worth noting that the strict inequalities on P are unnecessary and rule out of consideration the equilibrium (constant) solutions P = 0 and P = 1.

Clunky framing aside, part (a) of Question 3 is pretty standard, requiring the solving of the above (separable) differential equation with initial condition P(0) = 1/2. So, a decent integration problem trivialised by the presence of the stupefying CAS machine. After which things go seriously off the rails.
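
For the record, and before the derailment: assuming the standard separation-of-variables computation (our working, not anything quoted from the exam or the Report), part (a) comes out as

    \[\boldsymbol{P(t) \ = \ \frac{1}{1+e^{-t/2}}\,,}\]

which satisfies P(0) = 1/2 and, as one would hope of a logistic model, increases towards the equilibrium P = 1.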

The setting for part (b) of the question has a toxin added to the petri dish at time t = 1, with the bacterial growth then modelled by the equation

    \[\boldsymbol{\frac{{\rm d} P}{{\rm d} t\ }= \frac{P}{2}\left(1 - P\right) - \frac{\sqrt{P}}{20}\,.}\]

Well, probably not. The effect of toxins is most simply modelled as depending linearly on P, and there seems to be no argument for the square root. Still, this kind of fantasy modelling is par for the VCAA‘s crazy course. Then, however, comes Question 3(b):

Find the limiting value of P, which is the maximum possible proportion of the Petri dish that can now be covered by the bacteria.

The question is a mess. And it’s wrong.

The Examiners’ “Report” (which is not a report at all, but merely a list of short answers) fails to indicate what students did or how well they did on this short, 2-mark question. Presumably the intent was for students to find the limit of P by finding the maximal equilibrium solution of the differential equation. So, setting dP/dt = 0 implies that the right hand side of the differential equation is also 0. The resulting equation is not particularly nice, a quartic equation for Q = √P. Just more silly CAS stuff, then, giving the largest solution P = 0.894 to the requested three decimal places.
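
To make the CAS fodder explicit (our algebra, not the Report’s): substituting Q = √P and multiplying through by 20, the equilibrium condition becomes

    \[\boldsymbol{10Q^2\left(1 - Q^2\right) - Q \ = \ 0 \quad\Longleftrightarrow\quad Q\left(10Q^3 - 10Q + 1\right) \ = \ 0\,.}\]

The largest root of the cubic factor is Q ≈ 0.9457, giving the quoted P ≈ 0.894 to three decimal places.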

In principle, applying that approach here is fine. There are, however, two major problems.

The first problem is with the wording of the question: “maximum possible proportion” simply does not mean maximal equilibrium solution, nor much of anything. The maximum possible proportion covered by the bacteria is P = 1. Alternatively, if we follow the examiners and needlessly exclude P = 1 from consideration, then there is no maximum possible proportion, and P can just be arbitrarily close to 1. Either way, a large initial P will decay down to the maximal equilibrium solution.

One might argue that the examiners had in mind a continuation of part (a), so that the proportion begins below the equilibrium value and then rises towards it. That wouldn’t rescue the wording, however. The equilibrium solution is still not a maximum, since the equilibrium value is never actually attained. The expression the examiners are missing, and possibly have even heard of, is least upper bound. That expression is too sophisticated to be used on a school exam, but whose problem is that? It’s the examiners who painted themselves into a corner.

The second issue is that it is not at all obvious – indeed it can easily fail to be true – that the maximal equilibrium solution for P will also be the limiting value of P. The garbled information within question (b) is instructing students to simply assume this. Well, ok, it’s their question. But why go to such lengths to impose a dubious and impossible-to-word assumption, rather than simply asking directly for an equilibrium solution?

To clarify the issues here, and to show why the examiners were pretty much doomed to make a mess of things, consider the following differential equation:

    \[\boldsymbol{\frac{{\rm d} P}{{\rm d} t\ }= 3P - 4P^2 - \sqrt{P}\,.}\]

By setting Q = √P, for example, it is easy to show that the equilibrium solutions are P = 0 and P = 1/4. Moreover, by considering the sign of dP/dt for P above and below the equilibrium P = 1/4, it is easy to obtain a qualitative sense of the general solutions to the differential equation:

In particular, it is easy to see that the constant solution P = 1/4 is a semi-stable equilibrium: if P(0) is slightly below 1/4 then P(t) will decay to the stable equilibrium P = 0.
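
For anyone who wants the computation spelled out (our working; none of this appears in the exam or the curriculum): substituting Q = √P, the right hand side of the differential equation factorises as

    \[\boldsymbol{3P - 4P^2 - \sqrt{P} \ = \ 3Q^2 - 4Q^4 - Q \ = \ -Q\left(2Q - 1\right)^2\left(Q + 1\right)\,.}\]

The equilibria are therefore Q = 0 and Q = 1/2, that is P = 0 and P = 1/4, and, since the factorised expression is never positive for Q ≥ 0, we have dP/dt ≤ 0 everywhere. So solutions beginning above 1/4 decay down towards 1/4, and solutions beginning below 1/4 decay down to 0, which is exactly the semi-stable behaviour just described.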

This type of analysis, which can readily be performed on the toxin equation above, is simple, natural and powerful. And, it seems, non-existent in Specialist Mathematics. The curriculum contains nothing that suggests or promotes any such analysis, nor even a mention of equilibrium solutions. The same holds for the standard textbook, in which, for example, the equation for Newton’s law of cooling is solved (clumsily), but there’s not a word of insight into the solutions.

And this explains why the examiners were doomed to fail. Yes, they almost stumbled into writing a good, mathematically rich exam question. The paper-thin curriculum, however, wouldn’t permit it.