*This is our final excerpt from* Teaching Mathematics at Secondary Level, *Tony Gardiner’s 2016 commentary and guide to the English Mathematics Curriculum. (The first two excerpts are here and here.) It is a long and beautifully clear discussion of the nature of problem-solving, and of its proper place in a mathematics curriculum (pp 63-73). (For Australia’s demonstration of improper placement, see here, here and here.)*

*Gardiner frames his discussion around four bullet points from the English Key Stage 3 (early secondary) Program of Study:*

**Solve problems**

- *develop their mathematical knowledge, in part through solving problems and evaluating the outcomes, including multi-step problems*
- *develop their use of formal mathematical knowledge to interpret and solve problems, including in financial mathematics*
- *begin to model situations mathematically and express the results using a range of formal mathematical representations*
- *select appropriate concepts, methods and techniques to apply to unfamiliar and non-routine problems.*

*****************************************************

These four bullet points are clearly meant to encourage pupils and teachers to see school mathematics as more than endless practice with dry-as-dust formal technique. But beyond this admirable aspiration, it is far from clear what exactly is being advocated. We base our commentary on three questions.

- What is meant by a “problem”, rather than (say) an “exercise”?
- What does it mean to “solve problems”?
- And why are “multi-step” problems important?

**2.3.1**

We begin by clarifying the distinction between “exercises” and “problems”.

An **exercise** is a task, or a collection of tasks, that provides *routine practice* in some technique or combination of techniques. The techniques being exercised will have been explicitly taught, so the meaning of each task should be clear. Each sequence of exercises is designed to cultivate fluency in using the relevant techniques, and all that is required of pupils is that they implement the procedures more-or-less as they were taught in order to produce an answer. The overall goal of such a sequence of exercises is merely to establish mastery of the relevant technique in a suitably robust form. In particular, a well-designed set of exercises should help to avoid, or to eliminate, standard misconceptions and errors.

Exercises are not meant to be particularly exciting, or especially stimulating. But they can give pupils a quiet sense of satisfaction. Without a regular diet of suitable *exercises*, ranging from the simple to the suitably complex (including standard variations), pupils are likely to lack the repertoire of basic techniques they need in order to make sense of mildly more challenging tasks …. In other words,

*exercises* are the bread-and-potatoes of the mathematics curriculum.

Pupils in England clearly need more (carefully prepared) “bread-and-potatoes” exercises than they currently get. However, bread and potatoes alone do not constitute a healthy diet. Pupils also need more challenging activities both to whet their mathematical appetites, and to cultivate an inner willingness to tackle, and to persist with, simple but unfamiliar (or “non-routine”) **problems**. A *problem* is any task which we do not immediately recognise as being of a familiar type, and for which we therefore know no standard solution method. Hence, when faced with a *problem*, we may at first have no clear idea how to begin.

The first point to recognise is that a task does not have to be all that unfamiliar before it becomes a *problem* rather than an *exercise*! In the absence of an explicit problem-solving culture, an exercise may appear to the pupil to be a *problem* simply because its solution method has not been mentioned for a week or so, or because it is worded in a way which fails to announce its connection with recent work. The second point is that the distinction between a *problem* and an *exercise* is not quite as clear-cut as we have made it look, and is to some extent time- and pupil-dependent. For example, an “I’m thinking of a number” *problem* from Year 5 or Year 6 should by Year 8 be seen to be a mere *exercise* in setting up and solving a simple equation.

Most useful techniques involve a *chain* of simple steps, and the technique as a whole is only an effective tool if *the complete chain* can be carried out **entirely reliably**—a requirement which may only be achieved after extensive practice. Examples include: any of the standard written algorithms; the process of turning a fraction into a decimal; the sequence of steps required to add or subtract two fractions, or to solve an equation or inequality, or to multiply out and simplify an algebraic expression. Hence each set of *exercises* should include tasks that force pupils to think a little more flexibly, and that require them to string simple steps together in a reliable way. Too many sets of *exercises* get stuck at the level of “one-piece jigsaws”—with one-step routines being practised in isolation, ignoring key variations. Pupils need to learn from their everyday experience that the whole purpose of achieving fluency in routine bread-and-potatoes *exercises* is for them to learn to marshal these techniques to solve more demanding **multi-step** *exercises*, and more interesting, if mildly unsettling, *problems*.

**2.3.2**

This distinction between *exercises* and *problems* affects how we choose to introduce each new topic or technique. Should we concentrate on relatively simple examples that minimise pupil difficulties, and which seem likely to guarantee a quick pay-off? Or should we—when working with the whole class—move quickly on to examples that provide a significant challenge, and so require pupils from the outset to grapple with (carefully chosen) tasks of a more demanding nature?

How challenging one can safely be will depend on the pupils. But experience from those who observe lessons in other countries suggests that the English preference for concentrating the initial worked examples on easy cases **increases the extent of subsequent failure**. Easy initial examples lead to cheap apparent success; but this initial pupil success may be based on pupils’ own inferred methods that appear to work in easy cases, but which are flawed in some way; or on backward-looking methods, that seem (to the pupil) to work in simple instances, but which do not extend to the general case. So we need to consider the benefits of starting each new topic with a harder “class problem” that brings out the full complexity of the method that we want pupils to master, and then to follow this up with *exercises* that may start simply, but which oblige pupils to think flexibly from the outset, and to handle standard variations including inverse problems.

**2.3.3**

The last 30 years have witnessed a consistent concern about pupils’ ability to “use” the elementary mathematics they are supposed to know. Previous versions of the [English] mathematics National Curriculum displayed an admirable determination to incorporate “Using and applying” within teaching and assessment. But such determination is not enough. The experience of the last 25 years in England is more useful as a guide to what does **not** work than to what does work. Much effort has been expended in trying to do better—but with limited effect. In particular, ambitious attempts to coerce change—using extended investigations, coursework, and “modelling”—have mostly served to demonstrate what should **not** be officially required at this level.

Somewhere along the line we seem to have lost sight of simple **word problems**. *Word problems* typically consist of two or three short sentences, from which pupils are required

- to extract the intended meaning and any required information,
- to identify what needs to be done,

and then

- to carry it out, and interpret the answer in the context of the problem.

Everyday uses of elementary mathematics tend to come in some variation of this form. Yet the simplest exercises, which might be solved routinely if they were presented *without words*, become powerful discriminators when given this gentle packaging. The need for pupils to read and extract the relevant data from two or three English sentences may appear routine—but it is a skill that has to be learned the hard way, and that constitutes the initial stepping-stone *en route* to the ultimate solution of almost any problem. This simple format can be tweaked to cover the standard variations of the underlying task (e.g. so that it appears both in *direct* and in the various *indirect* forms).

During Key Stage 1 [the first two years of schooling] *word problems* are important because they reflect the fundamental links between

- the world of mathematical ideas and mathematical reasoning,

and

- the world of language.

Indeed, for young children, the *logic* of mathematics is inextricably bound up with the *grammar* of language.

At later stages *word problems* continue to serve as an invaluable way of linking the increasingly abstract world of mathematics and the world where its ideas can be applied. That is, they constitute the simplest exercises and problems in any programme that seeks to ensure that elementary mathematics can be used.

The suggestion that improving mathematical literacy depends on rediscovering the world of carefully structured word problems is both more ambitious and more modest than what has been attempted in recent English reforms.

- It is *more ambitious* in that the evidence from other countries shows just how much more we might achieve were we to incorporate a *permanent thread* of such focused material from the earliest years.
- It is *more modest* in that it explicitly encourages *more focused* (and hence more manageable) tasks—short problems with a clearly specified beginning and end, but with the path from one to the other left for the solver to devise. Such problems have “closed” beginnings and “closed” ends, but are **open-middled**. Almost any mental arithmetic problem, or word problem, might serve as an example. Suppose we ask:

*“I pack peaches in 51 boxes, with 16 peaches in each box.*

*How many boxes would I use if each box contained just 12 peaches?”*

What is given and what is required is “closed”—i.e. specified uniquely. But the mode of solution is left entirely open:

- some pupils might calculate the total number of peaches and then divide by 12;
- one would prefer to see a more structural version of this, representing the total number of peaches as “51 × 16” without evaluating, and the required number of boxes as “(51 × 16) ÷ 12”, cancelling before calculating;
- others might notice that 16 = 12 + 4, and look for the number ▢ satisfying “51 × (12 + 4) = ▢ × 12”;
- while some might remove 4 peaches from each of the 51 boxes, and group the 4s in threes (4 + 4 + 4 = 12) to get 51 ÷ 3 = 17 additional boxes.
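The arithmetic behind these routes is easy to check. Here is a minimal Python sketch (my illustration, not part of the original text) confirming that the first and last routes agree:

```python
# Peach problem: 51 boxes of 16 peaches, repacked into boxes of 12 each.
boxes, old_size, new_size = 51, 16, 12

# Route 1: find the total number of peaches, then divide by the new box size.
total = boxes * old_size        # 51 x 16 = 816 peaches
route1 = total // new_size      # 816 / 12 = 68 boxes

# Last route: remove 4 peaches from each of the 51 boxes; the removed 4s
# regroup into twelves, three at a time, giving 51 / 3 = 17 extra boxes.
extra = (boxes * (old_size - new_size)) // new_size
route2 = boxes + extra          # 51 + 17 = 68 boxes

print(route1, route2)  # -> 68 68
```

Both routes give 68 boxes, which is what the structural version 51 × 16 ÷ 12 yields after cancelling.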

**2.3.4**

Pupils need a regular diet of problems and activities designed to strengthen the link between elementary mathematics on the one hand and its application to simple problems from the wider world on the other. *Word problems* are only a beginning.

Some have advocated using “real-world” problems. But though these may have a superficial appeal, their educational utility is limited. Problems which support the move towards using and applying beyond the limited world of *word problems* need to be very carefully constructed, so that the real context truly reflects the mathematical processes pupils are expected to use as part of their solution. (Problems which have to be carefully designed in this way are sometimes called “realistic”.)

The related claim that technology allows pupils to work with “real-world problems” and with “real (or ‘dirty’) data” becomes important once the underlying ideas have been grasped. However, for relative beginners the claim too often ignores the distracting effect of the *noise* which is created by “real” contexts, by “real” data, and by the non-mathematical interface that so easily prevents pupils from grasping the underlying mathematical message.

**2.3.5**

The official programme of study makes repeated reference to the need to solve **multi-step** problems. A *multi-step* problem is like a challenge to cross a stream that is *too wide to straddle with a single jump*, so that the prospective solver is obliged to look for stepping-stones—intermediate points which reduce the otherwise inaccessible challenge of crossing from one bank (what is given) to the other (the completed solution) to *a chain of individually manageable steps*. In elementary mathematics, this art has to be learned the hard way. It should not be seen as optional, or as a matter of taste. It is central to what elementary mathematics is about, and to how it is used.

One might think that—given the original emphasis on *Using and applying*—this goal has been an integral part of the [English] National Curriculum since its inception. But that is not quite true—for we have too often confused

- “solving problems”, and tackling “multi-step” problems

with

- *real-world* problems, and *extended* tasks.

The limitations of “real-world” problems were outlined in the previous Section 2.3.4. An *extended* task allows pupils considerable freedom, and can be beneficial precisely because the outcomes lie to some extent outside the teacher’s control. However, this lack of predictability and control means that extended tasks are **not** an effective way for most pupils to *learn* the art of solving *multi-step* problems. For most teachers, this art is much more effectively addressed through **short**, easily stated problems in a specific domain (such as number, or counting, or algebra, or Euclidean geometry), where

- what is given and what is required are both clear,
- but the route from one to the other requires pupils to identify one or more intermediate stepping-stones (that is, they are “open-middled”)—as with
- solving a simple number puzzle, or
- interpreting and solving word problems, or
- proving a slightly surprising algebraic identity, or
- angle-chasing (where a more-or-less complicated figure is described and has to be drawn, with some angles given and some sides declared to be equal, and certain other angles are to be found—using the basic repertoire of angles on a straight line, vertically opposite angles, angles in a triangle, and base angles of an isosceles triangle), or

- proving two line segments or two angles are equal, or that two triangles are congruent (where the method of proof is not immediately apparent).

The steps in the solution to a multi-step problem are like the separate links in a chain. And the difficulty of such problems arises from the need to select and to link up the constituent steps into a single logical chain. Suppose pupils are faced with:

**Question:** “I’m thinking of a two-digit number *N* < 100, which is divisible by three times the sum of its digits. How many such numbers are there?”

In Year 7 pupils may see no alternative to guessing, or to testing each “two-digit number” in turn. But by Year 9 one would like some to respond to the trigger in the question

“three times the sum of its digits”

by gradually noticing some of the hidden stepping stones.

**Steps toward a solution**

1. The number has to be a multiple of 3 (“divisible by **three times** the sum of its digits”).
2. Hence the sum of its digits must be a multiple of 3 (standard divisibility test).
3. But then the number is divisible by 9 (“divisible by three times a multiple of 3”).
4. And so the sum of its digits must be a multiple of 9 (standard divisibility test).
5. So the number is divisible by 27 (“divisible by three times a multiple of 9”).
6. So we only have to check 27, 54, and 81. **QED**
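For readers who want to confirm the count, a brute-force check (my illustration, not Gardiner’s) takes a few lines of Python:

```python
# Find all two-digit numbers N divisible by three times their digit sum.
def digit_sum(n):
    return sum(int(d) for d in str(n))

solutions = [n for n in range(10, 100) if n % (3 * digit_sum(n)) == 0]
print(solutions)  # -> [27, 54, 81]
```

The brute force agrees with the stepping-stone argument: exactly three such numbers.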

The sequencing of the steps, and the connections between the steps, are part of the solution. In short, *basic routines become useful only insofar as sufficient time is devoted to making sure they can be linked together to solve more interesting (multi-step) problems. *

**2.3.6**

Expecting pupils to select and to coordinate simple routines to *create *a chain of steps in order to solve simple multi-step problems should be part of mathematics teaching for all pupils. In contrast, recent efforts to improve the effectiveness of mathematics instruction in England have concentrated on:

- the teacher, textbook author, or examiner *breaking up* each complex procedure into easy steps, and then concentrating on teaching and assessing the easy steps, or atomic outcomes (one-piece jigsaws),
- monitoring centrally whether these atomic outcomes can be performed in *isolation*, and
- ignoring the fact that we have neglected the most demanding skill of all—namely that of *integrating* the separate steps into an effective *multi-step procedure*.

The evidence from international studies confirms what should have been obvious: this reductionist process of de-constructing elementary mathematics into atomic parts, combined with central monitoring that rewards partial success, has distorted the way pupils and teachers perceive elementary mathematics in a most unfortunate way. Improved problem solving and more effective mathematics teaching depend on enhancing the skill of the teacher. In contrast, the policy of focusing on *targets* and *testing*, and our misplaced dependence on crude measures of “pupils’ progress”, have tended to undermine the authority, the professional judgement, and the perceived long-term responsibility of the teacher.

Solving problems is hard. Any system that uses targets and testing to exert pressure on schools soon discovers the awkward facts that assessment items that require pupils to link two or more steps

- have a high failure rate, and
- generate pupil responses whose profile is at odds with the contractual demands placed on those who design centrally administered tests.

Such problems are therefore deemed unsuitable, and the tests tend to concentrate on more manageable *one-step* routines (or break down longer questions into a pre-ordained sequence of one-step “subroutines”). As long as teachers are judged on test outcomes, and as long as unfamiliar, multi-step problems are largely excluded from the official tests, teachers will continue to conclude that “in the (short-term) interests of their pupils” they dare not waste time developing the only thing that matters in the long run—namely:

*to provide their pupils with the skills and attitudes they need for the* **next** *phase.*

In short, England has adopted an “improvement strategy” that guarantees neglect of the delicate art of solving multi-step problems, and that is therefore self-defeating. Central prescription, and political pressure to demonstrate relentless year-on-year improvement, have resulted in a national didactical blind spot, with curriculum objectives and assessment—and hence teaching—becoming atomised, so that pupils are only expected to handle “one-piece jigsaws”. Exams have routinely broken down each problem into a succession of easy steps—in order to minimise the risk of failure, and to ease “follow through marking” for the examiner. Teachers have then concluded that the delicate art of *interlinking* simple steps can be safely ignored. And we have all pretended that

- candidates who can implement (most of) the constituent steps separately
- have thereby achieved mastery of the *integrated* technique.

This is a delusion. The individual steps may be a starting point; but the power and challenge of elementary mathematics lies in learning *how simple ideas can be combined* to solve problems that would otherwise be beyond our powers. That is, the essence of the discipline lies not so much in the techniques themselves as in the *connections* between its ideas and methods. Hence the curriculum (and, where possible, its assessment) needs to cultivate the ability to tackle *multi-step* problems without them being artificially broken down into steps.

A curriculum or syllabus can specify the individual techniques, or steps; but this is futile if one then forgets that it is the **linking** of the material which determines whether it can be effectively used to solve problems. This interlinking is an elusive property, *which depends entirely on the way the material is* **taught**: that is, it depends on the teacher. So we need a system in which teachers are free (nay, in which teachers feel professionally obliged) to value this activity in their classrooms, even though its value will only become apparent at *subsequent* stages—after their pupils have moved on to other classes.

There is a lot here. I think for me, one continual source of frustration is that opponents of *exercises* do not seem to have the faintest clue what exercises are even *for*, or perhaps more accurately, do not seem to understand how students *actually learn mathematics*.

This is mentioned above and is spot on. Tony goes on to say a lot of other important things as well, too much for one comment. But greatly appreciated.

If this author was to review any of the “common” 7 to 10 Mathematics textbooks available through most school suppliers… the pass rate would be zero.

Which is interesting, because any mathematics teacher knows the more “popular” books are either the one the school has always used (about 90% of cases) or the one with the “best” exercises (9% of cases).

The other 1% of schools seem to be giving up on textbooks and creating their own material. As I presume Universities have done since their inception.

Thank you Marty (and Tony Gardiner), mighty contributions!

All the credit to Tony.

This is probably way too late to be useful, but I have just rediscovered my own article “The art of problem solving” – in Tim Gowers’ wonderful “Princeton Companion to Mathematics”. This focuses on mathematics at a higher level, but may still be of interest.

Huh. Somehow they left out Burkard’s and my balancing of wobbly tables.

Thanks very much, Tony. Amazing book. (Eventually, I’ll put a front page on this blog, so that people can actually find things.)

I’m not as high on this as you are, Marty.

1. It’s a little hard to read and process. I do appreciate the work and all. Just…still.

2. He doesn’t give enough very explicit examples to support his points.

3. Some assertions are not supported by evidence (just asserted). In particular that harder initial examples would lead to more success. If anything, Greg Ashman’s personal experience (and some theory) leads the other way. Of course harder exercises (problems, etc.) will usually have a lower success rate when we move to them, but has Gardiner really proved that having the ballbuster first as an example is better? Recall that just the technique itself is novel (TO THE NEOPHYTE).

4. At one point he asserts that kids in England are not getting enough meat and potatoes (easier problem drill) and then at another point, he says they do too much simple drill. Which is it? [And if there is some dialectic that allows him to have both sides of this, he sure doesn’t explain it clearly.]

5. Most textbooks I know have a range of exercises. Usually in order. There will be section 1 problems that are “simple plug and chug”. (Note this is not simple to the new learner…he may not have known how to do this sort of exercise AT ALL, before the example/lesson…after all, it is new content.) Section 2 problems will have a bit more multistep character to them. Also simple (and I mean simple) word problems belong here. And section 3 will be even harder (but should not be total IMO/Putnam level ballbusters). They may involve simple proofs/derivations or require using some technique from a previous lesson in combination with the current lesson. Or may be harder word problems. A good example of this breakdown into 1/2/3 type problems is Spiegel Applied Differential Equations (but it is common practice).

If anything Gardiner ought to take a few texts in the area he cares about and evaluate if they have sufficient drill at the level of difficulty he wants. (Or if they do, he should say how many problems the average kid should be required to work.)

You didn’t get what Gardiner said. Fine.

Possible that a more labored reading would clarify it. But in that case, it’s at least poor writing. I suspect that there’s more wrong than that, as per my earlier comment. Parsing it more isn’t worth it…so yes, fine.

Christ. Or maybe it’s poor reading.

Yeah, yeah I heard you the first time. I’m too dumb to process it.

Not buying it. The writing is confused, both in explication and in logic. And no fun to read.

A man believes what he wants to believe. I get it that he’s an ally. But that article is a mess.

Doesn’t matter if he’s fighting against Team Sauron. I call them how I see them.

Your unceasing knowitallness is beginning to get annoying.

Pax.

I had hoped to let it stand.

Anonymous is right in the sense that, when one puts something out there, it has to stand on its own. But rubbishing it (without giving details) only tells me that what is written does not immediately appeal: it is not serious “criticism”.

In case the point was missed: I have given my life to devising good “problems”, and showing what they can (and can’t) do. So I am all for using suitable problems. But there is something wrong with the way hyphenated “problem-solving” has been used to supplant “the mastery and effective use of ‘bread-and-potatoes’ techniques”.

S/he may also be right in another sense. Maths education is often too painfully blind and factious to “deconstruct” things in detail. (Why devote pages and pages to explaining why an approach is doomed, when all one wants to point out is that it *is* doomed?) So, like him/her, I tend (i) to think as best I can, and then (ii) state what seems sensible for the benefit of those who might understand without too much “explanation”. If you like, my goal is to rally those who may be inclined to do what seems to be sensible, rather than to try to “convert” those who are not so inclined.

Where there are waverers, I am happy to engage and try to explain as best I can. But I am not a teacher, so am bound to miss some important things, and do not pretend to have bomb-proof answers. But I do what I can to check that I am not talking complete nonsense. (My 30-odd books of problems serve as examples from primary up to undergraduate and research level. And others have lent their classes and their expertise along the way.)

I appreciate the comments about style: mathematicians often find writing difficult. But I work quite hard at it. And, since what I am trying to say is often contentious, I am happy then to let time serve as judge. Looking back, I recognise that the position I have come to represent has not “won the day”; but, while it has remained largely a sideshow, it does seem to have gained a certain respect over the years (and has attracted almost no serious criticism).

“…rubbishing it (without giving details) only tells me that what is written does not immediately appeal…”

I had numbered points 2, 3, and 4. To which you responded in detail. Point 4 in particular is important. It wasn’t like there were zero details. I gave a few examples and stopped. So yes, appeal WAS an issue. But “zero” is not correct.

The issue with your article is that it was so hard to parse, I stopped reading after a certain point. And…writing a master’s thesis taking it apart (numbers going up to 25 or whatever, yikes) is an unreasonable expectation. You should be communicating to me. My mind turned off after the end of 2.3.2.

Now…maybe I am just a low IQ squidiot (although if so, what the hell am I doing tro…err…commenting on this blog). Or maybe the piece really is poor communication. (Possible? Bueller?) Somehow, I manage to consume other math ed articles on the net, to include on this blog (e.g. Marty’s article for the place with the carbon nanotube graphic), but also others that I’ve cited (like Escalante’s article on his program, like Hotelling on the role of statistics in university departments, like the three Cargal articles). Honest, I’d enjoy a piece EVEN IF I DISAGREED WITH IT (e.g. advocating higher difficulty, less graduation, slanting things for the top of the class, or if it didn’t advocate that…whatever the heck it does advocate). But that piece was just hard to engage with. The fact that I can’t even clearly state your thesis (with which to engage, agree/disagree) is a bug, not a feature.

Oh…and just in case you missed it:

“If anything Gardiner ought to take a few texts in the area he cares about and evaluate if they have sufficient drill at the level of difficulty he wants. (Or if they do, he should say how many problems [of what difficulty level, added] the average kid [or specific track, added] should be required to work.)”

P.s. Just to piss Marty off (I do get banned a lot):

You are indeed pushing your luck. Knock it off.

OK, here goes (though I realise details are out of place here).

“2. He doesn’t give enough very explicit examples to support his points.”

I can see two kinds of “points”.

First examples of what students can/can’t do; second, examples of what might work better than what is done (in the UK) at present.

If I were addressing a single teacher, and had the experience to comment usefully, then I would be specific (highlighting things that several students in the class were doing that spelled *trouble*; and trying to think up examples of problems or approaches that might work better).

But the book emerged from our own version of your ACARA debacle. Our 1999 Curriculum was reasonable (and statutory) – but was ignored in favour of the (non-statutory) Numeracy Strategy; the 2007 Curriculum *ditched all the content* in favour of vague *processes* like (hyphenated) “problem-solving”.

A change of government in 2010 then led to a “battle for the curriculum” – with the usual factions. I set out to offer a critique of the usual “progressive” approach by showing

(a) what it had created; and (b) what should replace it.

For (a) I needed documented examples that applied across the board – and for this I used examples from TIMSS (of which there are lots included – with very striking data that are hard to refute).

For (b) I drafted a national curriculum on three levels: thumbnail; medium detail; full detail (which still needs fleshing out for school use). I had no experience to write a complete curriculum (I suspect no-one does), but I risked it as part of the debate. I am aware of no serious critique of what I devised, so I suspect I would write much the same today.

“3. Some assertions are not supported by evidence (just asserted). In particular that harder initial examples would lead to more success. If anything, Greg Ashman’s personal experience (and some theory) leads the other way. Of course harder exercises (problems, etc.) will usually have a lower success rate when we move to them, but has Gardiner really proved that having the ballbuster first as an example is better? Recall that just the technique itself is novel (TO THE NEOPHYTE).”

Anonymous overstates what I suspect I wrote (I doubt whether anything says “ballbuster first always works better”).

The challenge was to encourage the reader to reflect on the generally suppressed evidence of failure in 2(a) above. In place of the TIMSS examples, we prefer to quote our own, ever-improving, national test results – ignoring the fact that *internal* assessment can be dumbed down, either at the level of what is asked, or at the level of how it is marked – or both.

I agree with Anonymous that it makes sense for an exercise to start simple and work up; or for a textbook to start with prerequisites, and build.

However, this only works if one is committed to *reaching the intended goal* (the hard stuff towards which one is building: e.g. the arithmetic of fractions; or algebra; or proof; or integration; or whatever). I risked the thought (having checked it out as widely as I could, nationally and internationally), that English failure was linked to our peculiar “bottom-up” mentality, that (very sensibly) started from “where the students were”, but that then failed to follow through. (A current example might be our use of “bar models”: these are wonderful as a stepping stone between number/arithmetic and *algebra*. But teachers have discovered that bar models often suffice for students to get answers to our low level exam questions – so “Who needs algebra?”: bar models have become an end in themselves, rather than a means to a higher end.)

How could one convey the importance of holding on to the ultimate goal, rather than being “kind” and stopping halfway? I chose to highlight the example of what I called the “Japanese lesson” – never intending to convey that this was how *all* Japanese lessons proceed – where the initial focus of the lesson is on a complex, but accessible, problem whose exploration would bring out the intended mathematics. After solving the problem together and highlighting the intended method, the class exercises that follow begin with much easier applications of the key ideas. This was intended to be suggestive – not prescriptive. But maybe I never explained it very well.

“4. At one point he asserts that kids in England are not getting enough meat and potatoes (easier problem drill) and then at another point, he says they do too much simple drill. Which is it?”

Good question (if not very sympathetically expressed).

It all has something to do with the distinction made here in the UK between “product” and “process”. The progressives claim that “process” is what matters, and conveniently forget that it depends on “product” (i.e. technique): if I have to worry about the answer to 7×9 or do not recognise 45/60, then I have less bandwidth available for higher things.