We haven’t paid all that much attention to the California Mathematics Framework, except for noting Jo Boaler making an idiot of herself again (ditto Keith Devlin). We’re too busy with the local clowns. Greg Ashman, however, has noted a remarkable new front in the war over the CMF, and it is worth highlighting.
The new front has been opened by Brian Conrad, Director of Undergraduate Studies for Mathematics at Stanford, who writes:
High school students and parents deserve transparent, honest information about math skills required to earn STEM degrees — including in data science. It cannot be gimmickry with courses that suffice for college admission but leave students mathematically unprepared for their desired goals at a higher-education institution.
The California Math Framework (CMF) must be transparent and accurate in the guidance it provides on college preparation. As Director of Undergraduate Studies for Math at Stanford since 2013, I felt a responsibility to look into these matters.
As well as providing links to commentary critical of the CMF (here, here and here), Conrad has posted two articles of his own. Conrad’s first article, written with mathematicians Rafe Mazzeo and Patrick Callahan, is titled Key Mathematical Ideas to Promote Student Success in Introductory University Courses in Quantitative Fields. The document is pretty much what it advertises itself to be. The authors surveyed university colleagues in STEM fields, asking for their opinions on the key mathematical ideas to be taught in school in order to “promote student success in introductory university courses in quantitative fields”. It is a simple, clear and useful document.
The second article posted by Conrad is astonishing. It is titled Citation Misrepresentation in the California Math Framework. Conrad introduces the document as follows:
The current draft of the CMF is a 900+ page document that is the outcome of an 11-month revision by a 5-person writing team supervised by a 20-person oversight team. As a hefty document with a large number of citations, it gives the impression of being a well-researched and evidence-based proposal. Unfortunately, this impression is incorrect.
I read the entire CMF, as well as many of the papers cited within it. The CMF contains false or misleading descriptions of many citations from the literature in neuroscience, acceleration, de-tracking, assessments, and more. (I consulted with three experts in neuroscience about the papers in that field which seemed to be used in the CMF in a concerning way.) Often the original papers arrive at conclusions opposite those claimed in the CMF.
In his article, Conrad documents the many citation misrepresentations he claims the CMF contains, while also critiquing some blatant nonsense. It is amazing work, a powerful calling of the CMF’s scholarly bluff.
Brian Conrad has written a post-mortem article for The Atlantic (semi-paywalled).