The 2019 TIMSS results are about to be released, and the question is: should we care? The answer is “Hell yes”.

TIMSS is an international maths and science test, given at the end of year 4 and year 8 (in October in the Southern Hemisphere). Unlike PISA, which, as we have noted, is a Pisa crap, TIMSS tests mathematics. TIMSS has some wordy scenario problems, but TIMSS also tests straight arithmetic and algebra, in a manner that PISA smugly and idiotically rejects.

The best guide to what TIMSS is testing, and to what Australian students don’t know and can’t do, is the released 2011 test items and country-by-country results, here and here. For now, we’ll leave it to others to explore and comment. Later, we’ll update the post with sample items, and again once the 2019 results have appeared.

**UPDATE (08/12/20)**

The report is out, with the ACER summary here, and the full report can be downloaded from here. The suggestion is that Australia’s year 8 (but not year 4) maths results have improved significantly from the (appalling) results of 2015 and earlier. If so, that is good, and very surprising.

For now, we’ll take the results at face value. We’ll update if (an attempt at) reading the report sheds any light.

**FURTHER UPDATE (08/12/20)**

OK, it starts to become clear. Table 9.5 on page 19 of the Australian Highlights indicates that year 8 maths in NSW improved dramatically from 2015, while the rest of the country stood still. This is consistent with our view of NSW as an educational Switzerland, to which everyone should flee. We’re not sure why NSW improved, and there’s plenty to try to figure out, but the mystery of “Australia’s” dramatic improvement in year 8 maths appears to be solved.

**UPDATE (09/12/20)**

OK, no one is biting on the questions, so we’ll add a couple of teasers. Here are the first two released mathematics questions from the 2011 year 8 TIMSS test:

**1.** Ann and Jenny divide 560 zeds between them. If Jenny gets 3/8 of the money, how many zeds will Ann get?

**2.** (The second question is multiple choice, with options 0.043, 0.1043, 0.403 and 0.43.)

To see the percentage of finishing year 8 students from each country who got these questions correct, you’ll have to go to the document (pp 1-3).
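For anyone who wants to check the first question, here is a quick arithmetic sketch (the variable names are ours, not TIMSS’s):

```python
# Question 1: 560 zeds split between Ann and Jenny, with Jenny taking 3/8.
from fractions import Fraction

total = 560
jenny_share = Fraction(3, 8)

jenny = total * jenny_share   # 3/8 of 560
ann = total - jenny           # the remaining 5/8

print(jenny)  # 210
print(ann)    # 350
```

So Ann gets 350 zeds; the point of the teaser, of course, is not the answer but how many year 8 students around the world failed to find it.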

Correct me if I’m wrong, but isn’t the average number of points defined to be 500 in each study? This means that “improvement” must be interpreted as follows: year 8 math in other countries has deteriorated a lot faster than down under.

Thanks, Franz. I don’t know, and it is a very good question. I haven’t had a chance to look at the full report, but will try to figure that out. I’m willing to accept as a first approximation that an improved score implies improvement, but of course nothing needs to be taken at face value. That’s why the 2011 released items are interesting: they indicate, for each question and for each country, the percentage of students who answered correctly.
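If Franz’s reading is right, that each cycle’s scores are rescaled so the cohort mean is 500, then a country’s score measures only its standing relative to the other countries. A toy illustration (the raw scores are invented, not TIMSS data, and this is only a sketch of that hypothesis):

```python
# Toy illustration of mean-500 rescaling (invented raw scores, not TIMSS data).
# If each cycle is standardised to mean 500 and sd 100, a country can "improve"
# on the scaled score simply because the other countries fell further.
def rescale(raw_scores):
    """Map raw scores onto a scale with mean 500 and sd 100."""
    n = len(raw_scores)
    mean = sum(raw_scores) / n
    sd = (sum((x - mean) ** 2 for x in raw_scores) / n) ** 0.5
    return [500 + 100 * (x - mean) / sd for x in raw_scores]

cycle_2015 = [60, 50, 40]   # raw performance of three countries
cycle_2019 = [58, 40, 30]   # everyone got worse; country 0 fell least

print(round(rescale(cycle_2015)[0]))  # country 0's scaled score in 2015
print(round(rescale(cycle_2019)[0]))  # higher in 2019, despite a lower raw score
```

Here country 0’s raw performance drops from 60 to 58, yet its scaled score rises, because the rest of the cohort dropped further.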

OK Marty, I’ll bite…

The bar seems to be set quite low; I would hope these “Year 8 standard” questions are manageable for primary school students.

That said, they are (in my opinion) proper mathematical questions, unlike what appears on Year 7 NAPLAN.

Thanks, RF. Yes, the two questions I put up there are not hard: did you check how well students performed?

Nonetheless, as you write, the questions are properly mathematical, unlike NAPLAN (which I should have slapped above) and PISA. The questions get harder, and the year 8 questions get decently and nicely algebraic.

So, in other words, they are testing mathematics and not “numeracy”. (I’m still not totally sure what people mean when they say “numeracy” btw… but that isn’t your fault)

Yes. And did you check the link I gave to see how well Australians did on that mathematics test?

I did. The “Chinese Taipei – CHN” also caused me to raise an eyebrow, but for a totally different reason.

There is something not happening in schools, clearly, and it is across multiple countries with similar school structures (GBR, NZ). I can’t see it changing any time soon.

We have all heard the mantra that “correlation is not causation” and nod in agreement when we hear it; what this means in practice is another matter.

However, in many domains (such as education and public health) the data presented are essentially correlational in nature but presented as if they should be interpreted as causal.

Good night.

Well, yes. But there’s a flipside. The unwillingness to believe one’s own eyes can lead to spending thousands of dollars and thousands of hours proving that this feathered, quacking, waddling thing in front of us is indeed a duck. Or, which is the peculiar charm of maths ed, proving it’s not a duck.

Following on from your (Marty’s) much earlier question about the value of research in mathematics education, I have been wondering: Have there been any improvements in school education in the last 100 years? If so, what are they? And how has research in education influenced these improvements?