WitCH 40: The Primary Struggle

This is one of those WitCHes we’re going to regret. Ideally, we’d just write a straight post, but we have no time at the moment, and so we’ll WitCH it, hoping some loyal commenters will do some of the hard work. But, in the end, the thing will still be there and we’ll still have to come back to polish it off.

This WitCH, which fits perfectly with the discussion on this post, is an article (paywalled – Update: draft here) in the Journal of Mathematical Behavior, titled

Elementary teachers’ beliefs on the role of struggle in the mathematics classroom

The article is by (mostly) Monash University academics, and a relevant disclosure: we’ve previously had significant run-ins with two of the paper’s authors. The article appeared in March and was promoted by Monash University a couple of weeks ago, after which it received the knee-jerk positive treatment from education reporters (read: stenographers).

Here is the abstract of the article:

Reform-oriented approaches to mathematics instruction view struggle as critical to learning; however, research suggests many teachers resist providing opportunities for students to struggle. Ninety-three early-years Australian elementary teachers completed a questionnaire about their understanding of the role of struggle in the mathematics classroom. Thematic analysis of data revealed that most teachers (75 %) held positive beliefs about struggle, with four overlapping themes emerging: building resilience, central to learning mathematics, developing problem solving skills and facilitating peer-to-peer learning. Many of the remaining teachers (16 %) held what constituted conditionally positive beliefs about struggle, emphasising that the level of challenge provided needed to be suitable for a given student and adequately scaffolded. The overwhelmingly positive characterisation of student struggle was surprising given prior research but consistent with our contention that an emphasis on growth mindsets in educational contexts over the last decade has seen a shift in teachers’ willingness to embrace struggle.

And, here is the first part of the introduction:

Productive struggle has been framed as a meta-cognitive ability connected to student perseverance (Pasquale, 2016). It involves students expending effort “in order to make sense of mathematics, to figure out something that is not immediately apparent” (Hiebert & Grouws, 2007, p. 387). Productive struggle is one of several broadly analogous terms that have emerged from the research literature in the past three decades. Others include: “productive failure” (Kapur, 2008, p. 379), “controlled floundering” (Pogrow, 1988, p. 83), and the “zone of confusion” (Clarke, Cheeseman, Roche, & van der Schans, 2014, p. 58). All these terms describe a similar phenomenon involving the intersection of particular learner and learning environment characteristics in a mathematics classroom context. On the one hand, productive struggle suggests that students are cultivating a persistent disposition underpinned by a growth mindset when confronted with a problem they cannot immediately solve. On the other hand, it implies that the teacher is helping to orchestrate a challenging, student-centred, learning environment characterised by a supportive classroom culture. Important factors contributing to the creation of such a learning environment include the choice of task, and the structure of lessons. Specifically, it is frequently suggested that teachers need to incorporate more cognitively demanding mathematical tasks into their lessons and employ problem-based approaches to learning where students are afforded opportunities to explore concepts prior to any teacher instruction (Kapur, 2014; Stein, Engle, Smith, & Hughes, 2008; Sullivan, Borcek, Walker, & Rennie, 2016). This emphasis on challenging tasks, student-centred pedagogies, and learning through problem solving is analogous to what has been described as reform-oriented mathematics instruction (Sherin, 2002).

Stein et al. (2008) suggest that reform-oriented lessons offer a particular vision of mathematics instruction whereby “students are presented with more realistic and complex mathematical problems, use each other as resources for working through those problems, and then share their strategies and solutions in whole-class discussions that are orchestrated by the teacher” (p. 315). An extensive body of research links teachers’ willingness to adopt reform-oriented practices with their beliefs about teaching and learning mathematics (e.g., Stipek, Givvin, Salmon, & MacGyvers, 2001; Wilkins, 2008). Exploring teacher beliefs that are related to reform-oriented approaches is essential if we are to better understand how to change their classroom practices to ways that might promote students’ learning of mathematics.

Although teacher beliefs about, and attitudes towards, reform-oriented pedagogies have been a focus of previous research (e.g., Anderson, White, & Sullivan, 2005; Leikin, Levav-Waynberg, Gurevich, & Mednikov, 2006), teacher beliefs about the specific role of student struggle has only been considered tangentially. This is despite the fact that allowing students time to struggle with tasks appears to be a central aspect to learning mathematics with understanding (Hiebert & Grouws, 2007), and that teaching mathematics for understanding is fundamental to mathematics reform (Stein et al., 2008). The purpose of the current study, therefore, was to examine teacher beliefs about the role of student struggle in the mathematics classroom.

The full article is available here, but is paywalled (Update: draft here). (If you really want it …)

It is not appropriate this time to suggest readers have fun. We’ll go with “Good luck”.

UPDATE (28/7)

Jerry in the comments has located a draft version of the article, available here. We haven’t compared the draft to the published version.

WitCH 38: A Deep Hole

This one is due to commenter P.N., who raised it on another post, and the glaring issue has been discussed there. Still, for the record it should be WitCHed, and we’ve also decided to expand the WitCHiness slightly (and could have expanded it further).

The following questions appeared on the 2019 Specialist Mathematics NHT Exam 2 (CAS). The questions are followed by sample Mathematica solutions (screenshot corrected, to include final comment) provided by VCAA (presumably in the main for VCE students doing the Mathematica version of Methods). The examination report provides answers, identical to those in the Mathematica solutions, but indicates nothing further.

UPDATE (05/07/20)

The obvious problem here, of course, is that the answer for Part (b), in both the examination report and VCAA’s Mathematica solutions, is flat out wrong: the function f_k will also fail to have a stationary point if k = -2 or k = 0. Nearly as bad, and plenty bad, the method in VCAA’s Mathematica solutions to Part (c) is fundamentally incomplete: for a (twice-differentiable) function f to have an inflection point at some a, it is necessary but not sufficient to have f''(a) = 0. (The standard counterexample: f(x) = x^4 has f''(0) = 0 but a minimum, not an inflection point, at x = 0.)

That’s all pretty awful, but we believe there is worse here. The question is, how did the VCAA get it wrong? Errors can always occur, but why specifically did the error in Part (b) occur, and why, for a year and counting, wasn’t it caught? Why was a half-method suggested for Part (c), and why was this half-method presumably considered reasonable strategy for the exam? Partly, the explanation can go down to this being a question from NHT, about which, as far as we can tell, no one really gives a stuff. This VCAA screw-up, however, points to a deeper, systemic and much more important issue.

The first thing to note is that Mathematica got it wrong: the Solve function did not return the solution to the equation f_k'(x) = 0. What does that imply for using Mathematica and other CAS software? It implies the user should be aware that the machine is not necessarily doing what the user might reasonably think it is doing. Which is a very, very stupid property of a black box: if Solve doesn’t mean “solve”, then what the hell does it mean? Now, as it happens, Mathematica’s/VCAA’s screw-up could have been avoided by using the function Reduce instead of Solve.* That would have saved VCAA’s solutions from being wrong, but not from being garbage.
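
To see the generic-parameter trap in miniature, here is a SymPy analogue (ours, not VCAA’s Mathematica; SymPy’s solve, like Mathematica’s Solve, happily hands back the answer for a generic parameter value):

    # A toy example of the generic-solution trap (SymPy, ours, not VCAA's):
    # "solve" returns the answer for generic k and silently ignores the special value k = 0.
    import sympy as sp

    x, k = sp.symbols('x k', real=True)
    print(sp.solve(k*x - 1, x))   # [1/k] -- fine for generic k, nonsense when k = 0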

Ask yourself, what is missing from VCAA’s solutions? Yes, yes, correct answers, but what else? This is it: there are no functions. There are no equations. There is nothing, nothing at all but an unreliable black box. Here we have a question about the derivatives of a function, but nowhere are those derivatives computed, displayed or contemplated in even the smallest sense.

For the NHT problem above, the massive elephant not in the room is an expression for the derivative function:

    \[\color{red} \boldsymbol{f'_k(x) = -\frac{x^2 + 2(k+1)x +1}{(x^2-1)^2}}\]

What do you see? Yep, if your algebraic sense hasn’t been totally destroyed by CAS, you see immediately that the values k = 0 and k = -2 are special, and that special behaviour is likely to occur. You’re aware of the function, alert to its properties, and you’re led back to the simplification of f_k for these special values. Then, either way or both, you are much, much less likely to screw up in the way the VCAA did.
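
To spell out those special cases (our working, from the derivative displayed above, not anything in VCAA’s solutions): substituting k = 0 and k = -2 gives

    \[\boldsymbol{f'_0(x) = -\frac{(x+1)^2}{(x^2-1)^2} = -\frac{1}{(x-1)^2}} \qquad \mbox{and} \qquad \boldsymbol{f'_{-2}(x) = -\frac{(x-1)^2}{(x^2-1)^2} = -\frac{1}{(x+1)^2}}\,,\]

neither of which is ever zero, and so neither f_0 nor f_{-2} has a stationary point.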

And that always happens. A mathematician always gets a sense of solutions not just from the solution values, but also from the structure of the equations being solved. And all of this is invisible, is impossible, all of it is obliterated by VCAA’s nuclear weapon approach.

And that is insane. To expect, to effectively demand that students “solve” equations without ever seeing those equations, without an iota of concern for what the equations look like, what the equations might tell us, is mathematical and pedagogical insanity.

 

*) Thanks to our ex-student and friend and colleague Sai for explaining some of Mathematica’s subtleties. Readers will be learning more about Sai in the very near future.

WitCH 37: A Foolproof Argument

We’re amazed we didn’t know about this one, which was brought to our attention by commenter P.N. It comes from the 2013 Specialist Mathematics Exam 2. The sole comment on this question in the Examination Report is:

“All students were awarded [the] mark for this question.”

Yep, the question is plain stuffed. We think, however, there is more here than the simple wrongness, which is why we’ve made it a WitCH rather than a PoSWW. Happy hunting.

UPDATE (11/05) Steve C’s comment below has inspired an addition:

Update (20/05/20)

The third greatest issue with the exam question is that it is wrong: none of the available answers is correct. The second greatest issue is that the wrongness is obvious: if z^3 lies in a sector then the natural guess is that z will lie in one of three equally spaced sectors of a third the width, so God knows why the alarm bells weren’t ringing. The greatest issue is that VCAA didn’t have the guts or the basic integrity to fess up: not a single word of responsibility or remorse. Assholes.
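
To make that natural guess concrete with a hypothetical example (not the exam’s actual regions): if, say, \pi/2 < \mathrm{Arg}(z^3) < \pi then, since \mathrm{Arg}(z^3) = 3\,\mathrm{Arg}(z) up to a multiple of 2\pi, we get

\boldsymbol{\mathrm{Arg}(z) \in \left(\tfrac{\pi}{6},\tfrac{\pi}{3}\right) \cup \left(\tfrac{5\pi}{6},\pi\right) \cup \left(-\tfrac{\pi}{2},-\tfrac{\pi}{3}\right)\,,}

three sectors of width \pi/6, spaced 2\pi/3 apart, and nothing like a single quadrant.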

Those are the elephants stomping through the room but, as commenters have noted, there is plenty more awfulness in this question:

  • “Letting” z = a + bi is sloppy, confusing and pointless;
  • The term “quadrant” is undefined;
  • The use of “principal” is unnecessary;
  • “argument” is better thought of as the measure of an angle, not the angle itself;
  • Given z is a single complex number, “the complete set of values for Arg(z)” will consist of a single number.
  • The grammar isn’t.

WitCH 36: Sub Standard

This WitCH is a companion to our previous, MitPY post, and is a little different from most of our WitCHes. Typically in a WitCH the sin is unarguable, and it is only the egregiousness of the sin that is up for debate. In this case, however, there is room for disagreement, along with some blatant sinning. It comes, predictably, from Cambridge’s Specialist Mathematics 3 & 4 (2020).

WitCH 35: Overly Resolute

This WitCH (arguably a PoSWW) comes courtesy of Damien, an occasional commenter and an ex-student of ours from the nineteenth century. It is from the 2019 Specialist Mathematics Exam 2. We’ll confess, we completely overlooked the issue when going through the MAV solutions.

Update (16/02/20)

What a mess. Thanks to Damo for pointing out the problem, and thanks to the commenters for figuring out the nonsense.

In general form, the (intended) scenario of the exam question is

The vector resolute of \boldsymbol{\tilde{a}} in the direction of \boldsymbol{\tilde{b}} is \boldsymbol{\tilde{c}},

which can be pictured as a right-angled triangle, with \boldsymbol{\tilde{c}} lying along \boldsymbol{\tilde{b}} and \boldsymbol{\tilde{a}} as the hypotenuse. For the exam question, we have \boldsymbol{\tilde{a}} = \boldsymbol{\tilde{i}} + \boldsymbol{\tilde{j}} - \boldsymbol{\tilde{k}}, \boldsymbol{\tilde{b}} = m\boldsymbol{\tilde{i}} + n\boldsymbol{\tilde{j}} + p\boldsymbol{\tilde{k}} and \boldsymbol{\tilde{c}} = 2\boldsymbol{\tilde{i}} - 3\boldsymbol{\tilde{j}} + \boldsymbol{\tilde{k}}.

Of course, given \boldsymbol{\tilde{a}} and \boldsymbol{\tilde{b}} it is standard to find \boldsymbol{\tilde{c}}. After a bit of trig and unit vectors, we have (in most useful form)

\boldsymbol{\tilde{c} = \left(\dfrac{\tilde{a}\cdot \tilde{b}}{\tilde{b}\cdot \tilde{b}}\right)\tilde{b}}
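
For the record, the bit of trig: with \theta the angle between \boldsymbol{\tilde{a}} and \boldsymbol{\tilde{b}},

\boldsymbol{\tilde{c} = \left(|\tilde{a}|\cos\theta\right)\dfrac{\tilde{b}}{|\tilde{b}|} = \left(\dfrac{\tilde{a}\cdot \tilde{b}}{|\tilde{b}|}\right)\dfrac{\tilde{b}}{|\tilde{b}|} = \left(\dfrac{\tilde{a}\cdot \tilde{b}}{\tilde{b}\cdot \tilde{b}}\right)\tilde{b}\,.}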

The exam question, however, is different: the question is, given \boldsymbol{\tilde{a}} and \boldsymbol{\tilde{c}}, how to find \boldsymbol{\tilde{b}}.

The problem with that is, unless the vectors \boldsymbol{\tilde{a}} and \boldsymbol{\tilde{c}} are appropriately related, the scenario simply cannot occur, meaning \boldsymbol{\tilde{b}} cannot exist. Most obviously, the length of \boldsymbol{\tilde{c}} must be no greater than the length of \boldsymbol{\tilde{a}}. This requirement is clear from the triangle pictured, and can also be proved algebraically (with the dot product formula or the Cauchy-Schwarz inequality).
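
Spelling out that algebra: from the formula for \boldsymbol{\tilde{c}} above and the Cauchy-Schwarz inequality,

\boldsymbol{|\tilde{c}| = \dfrac{|\tilde{a}\cdot \tilde{b}|}{|\tilde{b}|^2}\,|\tilde{b}| = \dfrac{|\tilde{a}\cdot \tilde{b}|}{|\tilde{b}|} \leq |\tilde{a}|\,.}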

This implies, of course, that the exam question is ridiculous: for the vectors in the exam we have |\boldsymbol{\tilde{c}}| > |\boldsymbol{\tilde{a}}|, and that’s the end of that. In fact, the situation is more delicate; given the pictured vectors form a right-angled triangle, we require that \boldsymbol{\tilde{a}} - \boldsymbol{\tilde{c}} be perpendicular to \boldsymbol{\tilde{c}}. Which implies, once again, that the exam question is ridiculous.
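
Concretely, with \boldsymbol{\tilde{a}} = \boldsymbol{\tilde{i}} + \boldsymbol{\tilde{j}} - \boldsymbol{\tilde{k}} and \boldsymbol{\tilde{c}} = 2\boldsymbol{\tilde{i}} - 3\boldsymbol{\tilde{j}} + \boldsymbol{\tilde{k}}, both requirements fail:

\boldsymbol{|\tilde{c}| = \sqrt{14} > \sqrt{3} = |\tilde{a}|\,, \qquad \left(\tilde{a} - \tilde{c}\right)\cdot \tilde{c} = (-1)(2) + (4)(-3) + (-2)(1) = -16 \neq 0\,.}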

Next, suppose we lucked out and began with \boldsymbol{\tilde{a}}- \boldsymbol{\tilde{c}} perpendicular to \boldsymbol{\tilde{c}}. (Of course it is very easy to check whether we’ve lucked out.) How, then, do we find \boldsymbol{\tilde{b}}? The answer is, as is made clear by the picture, “Well, duh”. The possible vectors \boldsymbol{\tilde{b}} are simply the (non-zero) scalar multiples of \boldsymbol{\tilde{c}}, and we’re done. Which shows that the mess in the intended solution, Answer A, is ridiculous.

There is a final question, however: the exam question is clearly ridiculous, but is the question also stuffed? The equations in answer A come from the equation for \boldsymbol{\tilde{c}} above and working backwards. And, these equations correctly return no solutions. Moreover, if the relationship between \boldsymbol{\tilde{a}} and \boldsymbol{\tilde{c}} had been such that there were solutions, then the A equations would have found them. So, completely ridiculous but still ok?

Nope.

The question is framed from start to end around definite, existing objects: we have THE vector resolute, resulting in THE values of m, n and p. If the VCAA had worded the question to find possible values, on the basis of a possible direction for the resolution, then, at least technically, the question would be consistent, with A a valid answer. Still an utterly ridiculous question, but consistent. But the VCAA didn’t do that and so the question isn’t that. The question is stuffed.

Further Update (26/06/20)

As commenters have noted, the Examination Report has finally appeared. And, as predicted, answer A was deemed correct, with the Report noting

Option A gives the set of equations that can be used to obtain the values of m, n and p. Explicit solution would result in a null set as it is not possible for a result of a vector to be of greater magnitude than the vector itself.

Well, it’s something. Presumably “result of a vector” was intended to be “resolute of a vector”, and the set framing is weirdly New Mathy. But, it’s something. Seriously. As John Friend notes, it is at least a small step along the way to indicating the question is not all hunky-dory.

That step, however, is way too small. We’ll close with two comments, reiterating the points made above.

1. The question is wrong

Read the question again, and read the first sentence of the Report’s comment. The question and report justification are fundamentally stuffed by the definite articles, by the language of existence. All answers should have been marked correct.

2. The question is worse than wrong

Even if the vectors \boldsymbol{\tilde{a}} and \boldsymbol{\tilde{c}} had been chosen appropriately, the question is utterly devoid of mathematical sense. It suggests a long and difficult method to solve a problem that, if it is indeed solvable, is trivial.


WitCHes in Batches

What we like about WitCHes is that they enable us to post quickly on nonsense when it occurs or when it is brought to our attention, without our needing to compose a careful and polished critique: readers can do the work in the comments. What we hate about WitCHes is that they still eventually require rounding off with a proper summation, and that’s work. We hate work.

Currently, we have a big and annoying backlog of unsummed WitCHes. That’s not great, since a timely rounding off of discussion is valuable. Our intention is to begin ticking off the unsummed WitCHes, which are listed below with brief indications of the topics. Most of these WitCHes have been properly hammered by commenters, though of course readers are always welcome to comment, including after summation. We’ll update this post as the WitCHes get ticked off. Thanks very much to all past WitCH-commenters, and we’re sorry for the delay in polishing them off. We’ll attempt to keep on top of future WitCHes.

WitCH 8 (oblique asymptotes – UPDATED 05/02/20)

WitCH 10 (distance function – UPDATED 29/03/20)

WitCH 12 (trig integral)

WitCH 18 (Serena Williams)

WitCH 20 (hypothesis testing)

WitCH 21 (order of algorithms)

WitCH 22 (inflection points)

WitCH 23 (speed functions)

WitCH 24 (functional equations)

WitCH 25 (probability distributions)

WitCH 26 (function composition)

WitCH 27 (function composition – UPDATED 15/06/20)

WitCH 28 (trig graphs)

WitCH 29 (inverse derivatives – UPDATED 19/06/20)

WitCH 30 (Eddie Woo)

WitCH 31 (function composition)

WitCH 32 (PISA)

WitCH 33 (probability distributions)

WitCH 34 (numeracy guide – added 05/02/20)

WitCH 35 (vector resolutes – added 14/02/20 – UPDATED 16/02/20)

WitCH 36 (integration by substitution – added 21/04/20)

WitCH 37 (complex argument – added 11/05/20 – UPDATED 20/05/20)

WitCH 38 (stationary points – added 20/06/20)

WitCH 33: Below Average

We’re not actively looking for WitCHes right now, since we have a huge backlog to update. This one, however, came up in another context and, after chatting about it with commenter Red Five, there seemed no choice. The following 1-mark multiple choice question appeared on the 2019 Exam 2 (CAS) of VCE’s Mathematical Methods. The problem was to determine Pr(X > 0), the possible answers being

A. 2/3      B. 3/4      C. 4/5      D. 7/9      E. 5/6

Have fun.

Update (04/07/20)

Who writes this crap? Who writes such a problem, who proofreads such a problem, and then says “Yep, that’ll work”? Because it didn’t work, and it was never going to. The examination report indicates that 27% of students gave the correct answer, a tick or two above random guessing.
 
We’ll outline a solution below, but first to the crap. The main awfulness is the double-function nonsense, defining the probability distribution \boldsymbol{f} in terms of pretty much the same function \boldsymbol{p}. What’s the point of that? Well, of course \boldsymbol{f} is defined on all of \boldsymbol{R} and \boldsymbol{p} is only defined on \boldsymbol{[-a,b]}. And, what’s the point of defining \boldsymbol{f} on all of \boldsymbol{R}? There’s absolutely none. It’s completely gratuitous and, here, completely ridiculous. It is all the worse, and all the more ridiculous, since the function \boldsymbol{p} isn’t properly defined or labelled as piecewise linear, or anything; it’s just Magritte crap.
 
To add to the Magritte crap, commenter Oliver Oliver has pointed out the hilarious Dali crap, that the Magritte graph is impossible even on its own terms. Beginning in the first quadrant, the point \boldsymbol{(b,b)} is not quite symmetrically placed to make a 45^{\circ} angle. And, yeah, the axes can be scaled differently, but why would one do it here? But now for the Dali: consider the second quadrant and ask yourself, how are the axes scaled there? Taking a hit of acid may assist in answering that one.
 
Now, finally to the problem. As we indicated, the problem itself is fine, it’s just weird and tricky and hellishly long. And worth 1 mark.
 
As commenters have pointed out, the problem doesn’t have a whole lot to do with probability. That’s just a scenario to give rise to the two equations, 
 
1) \boldsymbol{a^2 \ +\ \frac{b}{2}\left(2a+b\right) = 1} \qquad      \mbox{(triangle + trapezium = 1).}
 
and
 
2) \boldsymbol{a + b = \frac{4}{3}} \qquad \mbox{(average = 3/4).}
 
The problem is then to evaluate
 
*) \boldsymbol{\frac{b}2(2a + b)} \qquad \mbox{(trapezium).}
 
or, equivalently, 
 
**) \boldsymbol{1 - a^2} \qquad \mbox{(1 - triangle).}
 
 
The problem is tricky, not least because it feels as if there may be an easy way to avoid the full-blown simultaneous equations. This does not appear to be the case, however. Of course, the VCAA just expects the lobotomised students to push the damn buttons, which, one must admit, saves the students from being tricked.
 
Anyway, for the non-lobotomised among us, the simplest approach seems to be that indicated below, by commenter amca01. First multiply equation (1) by 2 and rearrange, to give
 
3) \boldsymbol{a^2 + (a + b)^2 = 2}.
 
Then, plugging in (2), we have 
 
4) \boldsymbol{a^2 = \frac29}.
 
That then plugs into **), giving the answer 7/9. 
 
Very nice. And a whole 90 seconds to complete, not counting the time lost making sense of all the crap.
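
And, for anyone who wants to double check the arithmetic by machine, here is a quick SymPy sketch (ours, simply verifying equations (1) and (2) above, not VCAA’s intended button-pushing):

    # Verify equations (1) and (2), and the value of Pr(X > 0) = 1 - a^2.
    import sympy as sp

    a, b = sp.symbols('a b', positive=True)
    eq1 = sp.Eq(a**2 + sp.Rational(1, 2)*b*(2*a + b), 1)   # triangle + trapezium = 1
    eq2 = sp.Eq(a + b, sp.Rational(4, 3))                   # equation (2)

    for sol in sp.solve([eq1, eq2], [a, b], dict=True):
        print(sol, sp.simplify(1 - sol[a]**2))   # 1 - a^2 simplifies to 7/9, as claimed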