Having fixed maths education and having run out of things to say, we’re open for suggestions.
Yeah, well, not really, or even close. We have, however, said all we plan to say on ACARA and their ridiculous curriculum, at least until whatever happens happens. And, although our to-do list runs to several volumes, with some to-dos kind of pressing, there is now, finally, some space for choice. So, if there is something you wish us to write upon, some WitCH you particularly wish to see updated, whatever, suggestions are welcome. They’ll be ignored, but they’re welcome.
“Where and when did it all go wrong?”
Looking at a 1966 Mathematics exam compared to a 2016 Mathematics exam in the same state, with mostly the same schools being examined, it seems to me that a lot has changed in 50 years.
In 1966 there were fewer questions on fewer topics, but much greater depth.
So where and when did it all go wrong?
If anyone has a complete run of Matriculation/HSC/VCE exam papers from 1966 to 2016 and wants to share them… that would also be really great.
When did it all go wrong?
Many were increasingly of the opinion that they’d all made a big mistake coming down from the trees in the first place, and some said that even the trees had been a bad move, and that no-one should ever have left the oceans.
Even those with digital watches?
But seriously though… somewhere in the last 50 years things went pretty wrong. From my very brief investigations, it seems to pre-date CAS calculators.
Yes, more seriously, I’ve been reading a *lot* about the curriculum in the 70s, and the curriculum war in the 90s. That first war was very different, with different and strange alliances, and things had clearly already gotten very screwy. I’m planning to write about it, but it’s difficult to get clear in my head, let alone write it clearly.
As for the old exams, I am not sure of the manner in which these might be shared, and I don’t want people posting them here, but I’ll contemplate it.
The state records office has most of them with maybe two years missing.
I’m not seeking to bash “new math” either – but that is a good example of maths-ed getting in and screwing things up; the lessons of which have clearly not been learned…
VUSEB (pre-1978) –> VISE (1979 – 1988) –> VCAB (1989 – 1993) –> BOS/VBOS (1994 – 1999) –> VCAA (2000 – )
RF, I think the attached paper (“Influences …” – Horton) sheds some light on your question in its opening section.
Influences of Secondary Mathematics Curriculum in Victoria
And for a decent historical summary of the mathematics curriculum in Victoria from VUSEB onwards, see attached paper (“Paper 2 – Working Towards …” – author unknown)
It’s not hard to see when and why the Victorian mathematics curriculum jumped the shark. (Just as interesting is when and why the NSW mathematics curriculum grew the beard).
Paper 2-Working Towards Change-VCE Mathematics
Many, many thanks.
Those final sentences really sum it all up:
“Thereafter, the role of mathematicians and mathematics educators would seem to be one of expert adviser to bureaucracy, within the framework of terms of reference decided by the bureaucracy. The decision as to who constitutes the expert, and whether advice is accepted, now appears to rest with the bureaucracy.”
I was thinking the problem was older than this for some reason.
Indeed. Those sentences really jump out and hit the nail on the head.
I also think the opening sentences are very telling:
“The creation of the Victorian Curriculum and Assessment Board marked a significant departure from prior approaches to curriculum determination. Responsibility for the development of senior secondary mathematics curriculum was entrusted to a select group of mathematics educators.”
Together with the sentence in the opening section:
“This board was to usher in the era of School Based Curriculum Development, and the focus of curriculum development shifted towards professional educators and teachers.”
This is very clearly the thin edge of the wedge and the “select group” got ‘corrupted’ over time.
It all seemed to happen quite quickly though, from what I’m seeing of the available curriculum documents: the “optional modules”, the splitting into multiple subjects, the replacing of university mathematicians with “mathematics educators”. (I notice they distinguish between professional educators and teachers – I wonder what the difference is…)
“An expert is one who knows more and more about less and less until [s/he] knows absolutely everything about nothing.” – Nicholas Murray Butler.
These *ahem* ‘professional educators’ would be people with a PhD in education. They are the above-mentioned experts.
No idea who first said this, but I do like it.
The trouble with these jokes is that there is such a thing as an expert, and a valuable expert. Why they are so rare, if even existent, in education is a specific question, and it should be addressed in a specific manner.
The answer is very simple. Compare what you have to do in, say, mathematics or physics to EARN a PhD versus what you can get away with doing in education to get GIVEN a PhD … You can ‘research’ any old bullshit in education and get a PhD for it … (ditto psychology but it mostly runs a distant second to education, except when they get into bed together). It’s an absolute cottage industry (with education academics ticking the ‘Justify My Existence’ box by supervising PhD candidates and propagating the bullshit) that exists in order to justify its existence.
You could get rid of all these so-called education experts, have prospective teachers serve a paid 2-year apprenticeship (after having completed at least an appropriate 3 year degree), and watch teaching standards rise. They give real and valuable experts a bad name.
I’d like to make a comment here. Maybe you think it isn’t a relevant point to make (and I would accept that).
I think that people with an education PhD *are* experts. They are experts in the academic field currently known as education (or one of its subfields).
I’ll add a second barrel. I think taking on all of education is not going to be a fight that anyone can win. I’m not sure what “winning” would even look like, and given a few possibilities I probably don’t want to win it. What I would prefer is that the education academics are kept away from school curriculum, that is a much smaller battle and one worth fighting (in my opinion).
Glen, I know you’re not doing a Terry, but I’m honestly not sure what you’re arguing.
Let’s leave aside interpretations of the word “experts”. What you and I want is for these education academics to be kept away from school curricula. How can you conceive of that happening without “taking on all of education”? I cannot.
I can also see absolutely no downside in fighting and winning against education, if “education” means the modern, entrenched approach to education. I can see no choice.
I think you are correct, that I/we have no chance of winning. But I see it as the only real fight. Yes, other battles are worth effort, but these battlefields are all a consequence of the fundamental perversion of education, and the destruction of any culture of attention, discipline and learning.
If I’m gonna go down, I’m gonna go down fighting the real fight.
Re: “I think that people with an education PhD *are* experts. They are experts in the academic field currently known as education (or one of its subfields).”
Glen, you’re right. They are experts in the truest sense. And the subfield of education that they are experts in is some bullshit subset of education. Usually created either by themselves, a colleague or a mentor. Then they stick their noses into things that they are NOT experts on, pretending that they are experts by a dodgy extrapolation from that subset.
There are experts in all sorts of bullshit, with the PhD to prove it. The problem is when some expert with a PhD in ‘The moon is made of green cheese’ theory claims – and is believed – to be an expert in planetary science and then gets appointed to some panel with a brief to review planetary science policy.
Related to experts on green cheese are the experts on pushing buttons on a CAS calculator who, by extrapolation, pass themselves off as experts on mathematics and get themselves appointed to exam setting panels, curriculum advisory panels, textbook writing teams, exam vetting, keynote speaking etc. The trouble is the influence these experts begin to have on things they know nothing about. And it’s teachers as a collective that let this happen.
Yes, but that’s not my point. My point is that jokes about “experts” miss the point.
Yes, these jokes paint all experts with the same brush. I apologise for being so glib, because I agree that “there is such a thing as an expert, and a valuable expert”.
The problem is that the term ‘expert’ has been hijacked by muppets to give themselves legitimacy and relevance. The term has lost its true meaning thanks to muppets like those who call themselves ‘education experts’.
No need to apologise, and I don’t want to be a humourless dick. Of course you are correct, that “expert” has been hijacked. But the problem is not just naive trust in official experts; there is as much of a problem in the smug dismissal of genuine experts.
The underlying problem is that no one wants to think.
Horwood. Thanks, John. Very interesting paper.
Yeah, my apologies for the “expert” quip as well.
I’m not convinced the problem can be entirely reduced to “no one wants to think” although this is a major part.
Perhaps there is also a “no one wants to listen, ask the difficult questions, accept the idea that they may have to change their ways… insert a bunch of other excuses” reason.
And perhaps the reason for the rarity is self-fulfilling. As JF pointed out, the decision about who gets to be a PhD in Education is made by a set/group/ring of other education PhDs who are, most likely, wanting to preserve their own importance.
Something along the lines of Kuhn’s “paradigm” view of the history of science.
But I could be wrong.
Well, it looks like this blog has developed a mind of its own and is virtually writing itself … We can’t have that. So I’m going to keep in the spirit of this blog and offer a new suggestion …
We all know that the confidence interval stuff in the Maths Methods Stupid Design is bullshit. But even bullshit can be OK if done properly. So I suggest a blog on why the confidence interval bullshit in Maths Methods is bullshit. To make things interesting, provocative and mathematical, and to justify why I think such a blog might be useful, below are three key points to ponder:
1) There are several different types of confidence intervals for the population proportion. The one chosen by VCAA is the poorest. It has many weaknesses, some are very obvious and some are less obvious.
2) Better confidence intervals could be used. Their derivation (see 3) below) is well-within the scope of the course and the use of the ubiquitous CAS technology makes their calculation simple. I would argue that the calculation of such intervals provides a more meaningful use of the technology.
3) The derivation of the confidence interval chosen by VCAA is poorly (and incorrectly) done in textbooks. The Stupid Design doesn’t even mention a derivation. The lack of a requirement for a (valid) derivation is a major impediment to understanding where the better confidence intervals come from.
(I’ve attached a 1011 page summary I came across that expands on the above points – for personal use only please)
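To make points 1) and 2) concrete, here is a minimal sketch in Python comparing the interval given in the standard textbooks (the Wald interval, i.e. p̂ ± z√(p̂(1−p̂)/n), if I’m identifying VCAA’s choice correctly) with the Wilson score interval, one of the better alternatives. The sample numbers are invented purely for illustration:

```python
import math

def wald_ci(x, n, z=1.96):
    """Wald interval: p_hat +/- z*sqrt(p_hat*(1-p_hat)/n).
    The usual textbook choice; its endpoints can spill outside [0, 1]."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

def wilson_ci(x, n, z=1.96):
    """Wilson score interval: obtained by inverting the z-test for p,
    rather than plugging p_hat into the standard error."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - half, centre + half)

# Small sample, high sample proportion: the Wald interval spills past 1,
# the Wilson interval stays inside [0, 1].
print(wald_ci(19, 20))    # upper endpoint exceeds 1
print(wilson_ci(19, 20))

# Degenerate case: sample proportion 0 gives a zero-width Wald interval.
print(wald_ci(0, 20))     # (0.0, 0.0)
print(wilson_ci(0, 20))   # still a sensible, non-trivial interval
```

The two pathological cases printed at the end are exactly the “endpoint greater than 1” and “sample proportion zero” shrugs mentioned elsewhere in this thread; the Wilson interval handles both, and its derivation (complete the square in the inverted z-test) is arguably within reach of the course.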
Declaration of self-interest: I’ve previously mentioned the idea of a blog on this to Marty. It’s possible it’s one of the things on his to-do list now that ACARA is (for the moment, at least …) on the back-burner. The only trouble is that the attachment may leave nothing more to say …
Derivation of confidence interval formulae for a population proportion
Thanks, John. I don’t mind if this post becomes a sand pit for whatever. Your confidence interval thing is on my “do soon” list, but you would be aware of my definition of “soon”.
Indeed. When I get a chance (on my do soon too list), I’ll do my best to uncover VCAA’s justification for using this particular confidence interval. But I think the answer(s) will simply be because:
1) It’s the one used in all the standard textbooks (stupidity by default),
2) It’s the only one. See 1). (Stupidity borne from ignorance),
3) We don’t need a justification (stupidity borne from arrogance).
A very interesting read, JF. Thanks.
I will openly admit that by the time this topic rolls around every year I have settled into the “let’s learn this stuff the way VCAA tests it and not think much more about it” mindset, which professionally is perhaps a bad thing, but from a “get the best exam score for students” perspective is hopefully OK. (Actually, I know it is; assuming exam scores are used as the sole measure of understanding.)
Getting back to the question of what Marty should write about – the more I think about it, the more I come to the conclusion that although VCAA makes horrible errors and VCE textbooks are full of WitCH material, it is the lower-down content (PISA, TIMSS, ACARA) where there is still plenty of crap, and perhaps a self-respecting teacher or two reading this might be spurred into action in their own schools; by VCE it is too late in many cases.
Just a thought.
RF, I agree that the real damage is done lower down. It’s the main reason I focussed on the primary level stuff in the draft curriculum. I don’t believe that PISA and NAPLAN, crap that they are, have been the cause in the past. But I think they may be a cause in the future. PISA is clearly being used as an argument for the modelling-investigation nonsense in the draft. TIMSS is not nonsense. If people looked at what TIMSS tests, and what it indicates students cannot do, that could be a force for improvement.
Thanks, RF. I think it’s very interesting too. I wonder how many maths teachers know that there’s more than one type of CI for a population proportion …? And that the one typically given in the textbooks and the VCAA Stupid Design is the poorest one …?
I can’t blame teachers for not knowing – it’s not exactly common knowledge and there’s nothing in the textbooks. Nevertheless, I wonder how many teachers have calculated a CI that has an endpoint greater than 1 and simply shrugged their shoulders …? Or wondered what happens if the sample proportion happens to be zero (or 1) and then just shrugged their shoulders …?
The justification that it’s the simplest one to calculate has no credibility in the age of the ubiquitous CAS. And there’s no emphasis on derivation in the Stupid Design, so the justification that the derivations of the other ones are too complicated doesn’t wash either. Some half-wit made a stupid decision.
Blind acceptance is the enemy of learning. (How many maths teachers tell their students NOT to do this, but then do it themselves all the time?)
Re: Teaching CI’s in Methods. I honestly but shamefully admit that I do the same (there’s no time to approach it any other way, particularly in 2021).
The calculation of the confidence interval is trivial on a CAS and requires no understanding at all. VCAA’s weak response to this is typically to include a 1 mark hypothesis testing question based on the confidence interval (although the words hypothesis testing are not used in the Methods exams or Stupid Design).
But I always do a couple of examples on finding things like sample size, level of confidence or sample proportion from a given confidence interval (and other given pieces of required information). But these are just mathematical tricks anticipating the rubbish VCAA might ask.
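For what it’s worth, one of those “reverse” questions can be sketched as follows. This assumes the textbook (Wald-style) interval, and the numbers are made up for illustration:

```python
import math

def n_from_wald_ci(lower, upper, z=1.96):
    """Recover the sample size from a reported Wald-style interval:
    the sample proportion is the midpoint, and the half-width
    equals z*sqrt(p*(1-p)/n), so solve for n."""
    p = (lower + upper) / 2
    half = (upper - lower) / 2
    return z * z * p * (1 - p) / (half * half)

# Forward check: p_hat = 0.4 and n = 100 give half-width z*sqrt(0.24/100);
# feeding the resulting interval back in should recover n = 100.
half = 1.96 * math.sqrt(0.4 * 0.6 / 100)
print(round(n_from_wald_ci(0.4 - half, 0.4 + half)))  # 100
```

As the comment says, this is just the algebraic trick anticipated above: nothing statistical is understood in doing it, which is rather the point being made.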
I try to do better with the CI for a population mean in Specialist Maths. But pragmatism still usually gets in the way of idealism.
Such things can be attempted in SACs if one is brave enough (or is by some miracle able to schedule the SAC before the end of Term 3 and not piss off a bunch of other subject teachers who also have SACs at that time but usually more students and so win the fight over scheduling… (OK, cooling down now))
And yes, Marty, I do agree that TIMSS leaves PISA for dust when it comes to Mathematics. Unfortunately, in every school I have been in, NAPLAN results are the focus (after VCE results) and, well… NAPLAN is not TIMSS.
Meta comment: I can’t reply to either Marty or John up above. Is that intentional? What is the intended practice — should I just re-comment on the main post? Or re-comment on the deepest one possible? Sorry if this is obvious to everyone else.
The latter (within the relevant thread).
Yeah, it’s intentional. We’re all sick of your “Education PhDs are experts” nonsense. (Or, there might be a maximum thread depth.)
Perhaps something to write about could be about what knowledge or expertise a maths curriculum writer should have? As a kind of self-help guide for people who have that job?
(I think it would need to be pretty specific because people throw the word ‘mathematician’ around without it meaning anything.)
You mean because the curriculum writers will be actively searching this bog for helpful ideas?
Was that a deliberate typo Marty – “bog” instead of “blog”?
If so, very funny (if inaccurate).
No, it was a typo. But it’s funny: I’ll leave it.
Maths Curriculum Writing Guide For Dummies …?
Idiot’s Guide to Maths Curriculum Writing …?
Maths Curriculum Writing DeMystified …?
If these people who have the job need a self-help guide, they shouldn’t have the job in the first place. Period.
A guide on how to choose maths curriculum writers – for the buffoons who make such a mess of this decision time and time again – is what’s required.
I’d like some tips on how I can get out of this chickenshit outfit and start my own country without all the dumbshits.
You could start Steve’s Hutt River Province.
Will there be a discussion of 2021 Further Mathematics examinations?
No.