The following WitCH is from VCE Mathematical Methods Exam 2, 2009. (Yeah, it’s a bit old, but the question was raised recently in a tutorial, so it’s obviously not too old.) It is a multiple choice question: The Examiners’ Report indicates that just over half of the students gave the correct answer of B. The Report also gives a brief indication of how the problem was to be approached:
Have fun.
Update (02/09/19)
Though undeniably weird and clunky, this question clearly annoys commenters less than me. And, it’s true that I am probably more annoyed by what the question symbolises than the question itself. In any case, the discussion below, and John’s final comment/question in particular, clarified things for me somewhat. So, as a rounding off of the post, here is an extended answer to John’s question.
Underlying my concern with the exam question is the use of “solve” to describe guessing/buttoning the solution to the (transcendental) equation ln(2k + 1)/(2k) = ln(7)/6. John then questions whether I would similarly object to the “solving” of a quintic equation that happens to have nice roots. It is a very good question.
First of all, to strengthen John’s point, the same argument can also be made for the school “solving” of cubic and quartic equations. Yes, there are formulae for these (as the Evil Mathologer covered in his latest video), but school students never use these formulae and typically don’t know they exist. So, the existence of these formulae is irrelevant for the issue at hand.
I’m not a fan of polynomial guessing games, but I accept that such games are standard and that “solve” is used to describe such games. Underlying these games, however, are the integer/rational root theorems (which the EM has also covered), which promise that an integer/rational coefficient polynomial has only finitely many candidate roots, and that these roots are easily enumerated. (Yes, these theorems may be a less or more explicit part of the game, but they are there and they affect the game, if only semi-consciously.) By contrast, there is typically no expectation that a transcendental equation will have somehow simple solutions, nor is there typically any method of determining candidate solutions.
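For concreteness, the integer/rational root game can be mechanised in a few lines of Python. This is just my sketch of the theorem at work (obviously not something a school student would write), applied to the quintic that John raises later in the comments; it assumes a nonzero constant term.

```python
from fractions import Fraction

def rational_root_candidates(coeffs):
    """Candidate rational roots p/q of an integer-coefficient polynomial
    (coeffs from highest to lowest degree, nonzero constant term):
    p divides the constant term, q divides the leading coefficient."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]
    p_divs = divisors(coeffs[-1])
    q_divs = divisors(coeffs[0])
    return sorted({Fraction(s * p, q)
                   for p in p_divs for q in q_divs for s in (1, -1)})

def poly(coeffs, x):
    # Horner evaluation, exact in rational arithmetic
    result = Fraction(0)
    for c in coeffs:
        result = result * x + c
    return result

# The quintic raised later in the comments: x^5 - x^4 + x^3 - x^2 + 2x - 2 = 0
coeffs = [1, -1, 1, -1, 2, -2]
roots = [c for c in rational_root_candidates(coeffs) if poly(coeffs, c) == 0]
# the only rational (indeed, the only real) root is x = 1
```

The point being: the candidate list here is just {±1, ±2}, so the “guessing” is a finite, principled search, unlike guessing at a transcendental equation.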
I find something generally unnerving about the exam question and, in particular, the Report. It exemplifies a dilution of language which is at least confusing, and I’d suggest is actively destructive. At its weakest, “solve” means “find the solutions to”, and anything is fair game. This usage, however, loses any connotation of “solve” meaning to somehow figure out the way the equation works, to determine why the solutions are what they are. This is a huge loss.
True, the investigation of equations can continue independently of the cheapening of a particular word, but the reality is that it does not. Of course, in this manner the Solve button on CAS is the nuclear bomb that wipes out all intelligent life. The end result is a double-barrelled destruction of the way students are taught to approach an equation. First, students are taught that all that matters about an equation are the solutions. They are trained to give the barest lip service to analysing an equation, to investigating whether the equation can be attacked in a meaningful mathematical manner. Secondly, students are taught that there is no distinction between a precise solution and an approximation, a bunch of meaningless decimals spat out by a machine.
So, yes, the exam question above can be considered just another poorly constructed question. But the weird and “What the Hell” incorporation of a transcendental equation with an exact solution that students were supposedly meant to “solve” is emblematic of an impoverishment of language and of mathematics that the CAS-infatuated VCAA has turned into an art form.
Side note before delving into the problem more deeply: this concept is taught by many as a formula rather than as a concept, which is only reinforced by the examiners’ comments.
Number 8, granted it’s off the point, but I’m not sure what you mean. You mean the formula for the average is commonly taught without justification for the formula? (Which, yes, is all the question requires.)
Correct. Many students, in my experience, are taught that average value has a formula and to learn it. A Methods paper in 2010(ish) may have tested the idea of a rectangle having the same area as an integral and asked for the height, but beyond this, it seems to be very commonly assessed as just a formula.
The word “average” is fraught with all sorts of strange problems in high school mathematics (and for real estate agents, but that is another thing altogether).
Within the context of the Methods course, the average value of a function is given by a very specific formula and no justification is explicitly required. So, given that the question is within *this* context, the crap is not in the meaning of average value itself or the formula.
By calculating the (simple) integral, and hence the average value as ln(2k+1)/(2k), and comparing this with ln(7)/6, the question is obviously cooked so that the (only) solution is k = 3 by inspection. So for me, making it a CAS-active question (and multiple choice to boot!) is the crap here. Because it’s just another brainless push-the-buttons question (which is what the Report encourages).
I think it’s worth remarking that in the limit k –> 0, the average value approaches 1, which is what you’d expect.
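Both the cooked value and the k → 0 limit are easy to sanity-check numerically. Here is a quick Python sketch (mine, obviously, and nothing to do with what the exam intended):

```python
import math

def average_value(k):
    # average of f(x) = 1/(2x+1) over [0, k]:
    # (1/k) * integral of 1/(2x+1) from 0 to k = ln(2k+1) / (2k)
    return math.log(2 * k + 1) / (2 * k)

target = math.log(7) / 6

# k = 3 hits ln(7)/6 on the nose (up to floating point)
assert math.isclose(average_value(3), target)

# and as k -> 0 the average value approaches f(0) = 1, as remarked above
assert math.isclose(average_value(1e-9), 1.0, rel_tol=1e-6)
```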
So is the crap due to the lack of quality in the distractors A, C, D and E as well as the contrived nature of the question?
Yeah, I was wondering what mistakes could you possibly make that would lead to *any* of the wrong options. The wrong options are definitely crap, but that’s because it’s a really dumb question to have in Section A of Exam 2. At the risk of over-looking some even more obvious crap, the question would be much more appropriate in Exam 1.
Thank you, RF and JF, for your comments. There is, as you have both noted, the stark clunkiness of the question. But there is something else there, and I think this is another one which, in that other aspect, would annoy the shit out of mathematicians but not teachers.
Ignoring the clunkiness of the question, teachers would presumably tend to think the question is difficult for students to get wrong (although 48% did …), or at least be unfairly misled, and so it’s at worst a no harm no foul thing. Mathematicians, however, and rightly or wrongly, would more likely be fixated on the foul.
Well, k = 3 is the only answer, so no troubles with the implication in the question that there’s only one value.
The function is well-defined, so no trouble there.
Is it a matter of definition: mathematicians define the average value to be over the domain, not a subset of the domain …? I wouldn’t have thought so. So it’s OK to have an average value over the given interval (the function is continuous over the interval), so no trouble there.
The average value is clearly defined in Maths Methods, so there can’t be a quibble based on using that formula.
The unique answer is obvious by inspection, so no troubles with solutions beyond the scope of the course (and a CAS gives an approximate answer of exactly 3 anyway, so no need to even know how to integrate or solve something by inspection ….)
Square brackets are OK.
The wrong options are weak, but that’s not a mathematical error (in fact, the trouble is that none of the wrong options seem to arise from a good error!).
I see no obvious foul, the only thing that annoys me is that the question is on Exam 2 and is a multiple choice question.
JF, I agree with all you write. As I indicated, this one is like the error in WitCH 10. It annoys me, and I’m willing to bet it would annoy mathematicians in general, and I *wish* that it annoyed teachers, but I’m not surprised that it doesn’t.
OK… be gentle with me everyone… I’m going to ask why the question defines the function over a different domain to that which the average value is based on. Could they not have said f: [0,k]→R, f(x) = 1/(2x+1) and thus avoided a lot of issues?
Thanks, Number 8. The question in that sense is ok.
JF, I found your comment above in the trash can. Not sure why, so apologies if I somehow deleted it by mistake.
Marty, I put my comment in the trash can after realising I’d misread an earlier comment. No apology needed – please return it to the bin where it belongs.
Is there a difference here between log and Log? My (growing distant) memories of Complex Analysis are coming back looking at this – my recollection is that when spelt with a capital it is a single valued function, but spelt lowercase, the complex arguments come into play.
That’s the general rule, but not the issue I’m thinking of.
To think about this in a (hopefully) less VCAA examiner way… is the issue in the question or in the suggested solution? I have an idea of what you might be thinking of if the issue is in the VCAA intended solution.
Hi RF. The issue is in the question. As I suggested, it’s similar in mathematician-annoyingness to WitCH 10. Not as bad as that one, but similar.
Is the question improved by changing the last part to: “the value of k could be”… ?
Better and worse. There is a unique solution, so “could be” would be a little weird. But I think you’re getting to it.
OK, I’ll bet it’s the potential ambiguity:
“… f(x) = 1/(2x+1) over the interval [0, k] …”
instead of the better worded
Over the interval [0, k], the average value of the function …..
(If it is, it doesn’t annoy me greatly. Not nearly as much as the mere presence of the question on Exam 2)
No, not that. Note the title.
So, are you saying that if you work through the problem, it requires solving sqrt(2k+1) = 7^(k/6) for k?
Hence the use of “power” and “solve”?
Well, that’s noting the title *too* much. But, since we’re here, what would you do with the equation you created?
First I would look for a simple integer solution, starting by testing 0 or 1 (not for this question, but for questions of that nature). Failing that, I would try some form of numerical approximation, unless a calculator was available to do this for me.
Thanks, RF. Can one solve the equation? And, a prior and non-rhetorical question: what does “solve” mean?
“Solve” to me has (in a high school context) always meant “find the value of the pronumeral which makes the statement true”. However, once literal equations are encountered, it is perhaps more appropriate to say “transpose” rather than “solve”, although perhaps the two are used interchangeably?
In VCAA paper 2 exams, “Solve” is a function on the calculator more often than not.
Do the integration: 1/(2k) ln(2k+1).
Equate to the given mean: 1/(2k) ln(2k+1) = 1/6 ln(7).
Simplify: 1/k ln(2k+1) = 1/3 ln(7).
Require simultaneous solution to 1/k = 1/3 and 2k+1 = 7. By inspection the solution is clearly and trivially k = 3.
No powerful solvent required here.
But a powerful solvent means strong dissolving …. And by putting this question on Exam 2 it is certainly a strong dis to solving.
Thanks, JF. Would you solve 1/k log(3k + 1) = 1/3 log(7) the same way?
If your question was in Exam 2, both I and students would press some buttons (or write some Mathematica code) and get 3.92884 (correct to five decimal places). The wording of the question would need to change:
“The value of k is *closest* to”
My complaint would still be that the question is pointless in Exam 2 because CAS reduces it to triviality. At least average value can be used in Exam 1 as a context for testing integration skills.
The VCAA question has clearly been cooked so that the solution is readily obtained by inspection in the way that I posted. Although this method fails for your example, I don’t think that failure invalidates its use for the ‘cooked’ VCAA question (although I don’t see the point of ‘cooking’ questions for Exam 2). And if the VCAA question is solved using a CAS then my method is moot.
(As an aside, there is undoubtedly some fancy use of the Lambert W-function that would give exact solutions).
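For the record, here is roughly what the button-pressing amounts to for JF’s modified equation — a hedged Python sketch using simple bisection (the actual CAS algorithms are proprietary and presumably fancier):

```python
import math

def g(k):
    # difference between the two sides of (1/k)ln(3k+1) = (1/3)ln(7)
    return math.log(3 * k + 1) / k - math.log(7) / 3

# bisection on [1, 10], a bracket on which g changes sign
lo, hi = 1.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid   # root lies in [lo, mid]
    else:
        lo = mid   # root lies in [mid, hi]

root = (lo + hi) / 2  # ≈ 3.92884, the value JF quotes
```

The machine is doing nothing cleverer than this: a numerical method that happens, here, to stop at meaningless decimals rather than anything exact.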
Thanks, JF. Yes, the solution can readily be obtained by inspection. And, it can only be obtained by inspection. You’re not bothered by that, I guess because of the “readily”: students are not likely to go down a rabbit hole, and there’s no evidence that students did. I’m much less comfortable with the question, and the Report.
The word “Solve” in the Examiners’ Report should be decoded as “Use a CAS to solve the following equation”, which is what most if not all students would have done in one form or another (some may have evaluated the integral first, with the CAS, and then solved, with the CAS). In which case the only thing wrong with the question is that, mathematically, it’s utterly pointless, and the crap here is the very presence of the question.
The fact that the exact solution can only be obtained by inspection doesn’t bother me, except in the sense that I don’t see the point in cooking the question so that it *can* be done by inspection (after all, it’s a CAS exam, for better or for worse).
Thanks, JF and RF. The word “solve” appears to make me more uncomfortable than you guys. For me, giving a fundamentally unsolvable (transcendental) equation that happens to have an exact solution, is weird and disorienting. Anyway, there’s not more there (except the general clunkiness), and I’ll update soon.
OK, my turn to ask a question, Marty. A quintic equation is generally unsolvable, so would you have the same unease towards the following question:
Solve x^5 – x^4 + x^3 – x^2 + 2x – 2 = 0.
By the way, although I (and others) have been using the word ‘solve’, the question doesn’t explicitly use this word. It just asks for an option that gives the value of k. So I’m guessing it’s the comment in the Examiners Report that causes your unease. But again, for better or for worse it’s a CAS exam – the assumption is that students are going to ‘solve’ the equation by pressing buttons, that is, essentially get an answer using a numerical method that happens to be exact rather than a decimal approximation. I can better understand your unease if it was a ‘Solve’ question in Exam 1, but then again, would that unease extend to the above question?
I also note that there are many questions (eg. set in the context of drug dosages) with models of the form x = t e^t and questions such as how long before x < …. are asked. These equations are also fundamentally unsolvable but the expectation is that a decimal approximation is found using a CAS, and I don't think anyone has too much unease about this. (Such questions can be solved using the Lambert W-function, but I guess it opens up a new can of worms if equations are solved exactly by defining special functions to do the job ….)
Thanks, John. It’s a very good question. I was writing a reply, but it seemed to end up sufficiently long to make it the rounding-off update to the post, which I’ll do now.
Briefly on using the W-function or whatever to “solve” the equation, it is standard in mathematics to invent a new function so as to somehow capture the “solutions” of an equation. It is unclear, however, whether and when one wants to call this solving. For example, in what sense can the equation x^3 = 2 be solved? (Feel free to continue the discussion, but I’ll try to put up a dedicated post to this question.)
Decrappers,
Clearly as JF mentions you don’t need a CAS enabled root extractor to “guess” a solution for k in this example.
So I tried plugging sin(kx) = x, for k close to 1, into the solve X root-finding algorithm on the CX,
and it came back with X = 0 and the helpful(?) hint that there may be other solutions.
Wolfram Alpha was better, giving the other solutions without having to provide ranges.
That said, I was taught to use Newton’s iterative method for ‘guessing’ roots of polynomials etc., which mostly converges quickly, provided the initial guess is close, and is intuitive.
https://en.m.wikipedia.org/wiki/Newton%27s_method
Steve R
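Since Newton’s method came up, here is a minimal sketch of it in plain Python (not the CX’s actual algorithm, which is not public), applied to Steve’s equation with k = 1.1; starting near 0.7, it lands on the nonzero root rather than the trivial one at 0:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # classic Newton iteration: x <- x - f(x)/f'(x)
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# sin(1.1x) - x = 0; start near 0.7 to catch the nonzero root
f = lambda x: math.sin(1.1 * x) - x
fp = lambda x: 1.1 * math.cos(1.1 * x) - 1
root = newton(f, fp, 0.7)  # ≈ 0.6809, matching the value WA reports below
```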
Thanks, Steve. That seems pretty weird and unsolvy. Just to clarify, what does the CX do if you try to get it to solve sin(9x/10) = x? What about sin(11x/10) = x?
A simple graph also helps:

For sin(kx) = x and 0 < k < 1, there is clearly only one intersection point of the line y = x and the curve y = sin(kx) (since the gradient of y = x is 1 and the gradient m of y = sin(kx) satisfies 0 < m ≤ k < 1 over -pi/2 < x < pi/2). And that intersection point is clearly (0, 0). So the unique solution to sin(kx) = x, where 0 < k < 1, is x = 0. (And *ahem* does solving this equation make anyone uncomfortable?)

For sin(kx) = x and k > 1, there are clearly more solutions than just x = 0 (and the number of solutions depends on the value of k).
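The graphical argument can also be checked by brute force: count the sign changes of sin(kx) − x on a fine grid. A rough Python sketch (mine; the grid is offset by half a step so it never lands exactly on the root at x = 0):

```python
import math

def count_roots(k, lo=-3.0, hi=3.0, n=600):
    # count sign changes of f(x) = sin(kx) - x on an offset grid;
    # all roots lie in [-1, 1], since |sin| <= 1
    f = lambda x: math.sin(k * x) - x
    step = (hi - lo) / n
    xs = [lo + step / 2 + i * step for i in range(n)]
    return sum(1 for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0)

# one root for k = 0.9, three for k = 1.1, as the graph suggests
counts = (count_roots(0.9), count_roots(1.1))
```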
JF,
Yes! Graphing/sketching the ‘squashed sine’ function against y = x on your tablet, whiteboard, calculator, envelope … quickly shows there should be 3 intersections for k just above 1 and only one for 0 <= k <= 1.
I picked the equation to test the functionality of solve X on the CX (and WA), which appears to be a root-finding algorithm in Mathematica. I was wondering if it used a Taylor series to approximate the solution close to 1 (similar to the approximation mentioned in the link below), or, more likely, an iterative approach using Newton’s method, or …?
https://en.m.wikipedia.org/wiki/Transcendental_equation
Steve R
Hi,
For k = 1.1, entering sin(1.1x) = x into WA gives roots 0 and ±0.680897, and for 0 < k < 1 the root will be 0.
On my 4-year-old CX, solve(sin(1.1x)=X,X) only gives 0, with the warning of other solutions.
Steve R
Thanks, Steve. Nice to have a warning. Does it say anything for sin(0.9x)?
Ahh yes. That was the other thing that disappeared from my earlier comment: Does the CX still give the warning that there may be other solutions when 0 < k < 1 ….? The TI-89* does (and it gives the same warning for both Arcsin(x) = x and Arcsin(x) = 1.3x).
*The TI-89 is where I drew the line in the stupid CAS calculator arms race, no TI-Insipid for me (although I had to work with it with students).
This is going off on a tangent to a tangent, but since the issue of mindlessly using CAS to solve equations has been broached, I would be interested to read what others (especially users of the TI handheld) think of Question 5(c)(ii) in the 2017 Specialist Exam 2…. The examiners report states that only 15% of students answered correctly, and apparently “incorrect answers involving other locally minimum values were frequent”.
Hmm. Is this a topic for another WitCH?
My complaint is not very deep. The issue with the question is that if students use the TI to (attempt to) find the minimum distance by solving for when the derivative of the distance function is zero, the TI finds a handful of stationary points, but not all of them. And, unfortunately for students, the missing stationary point turns out to be the minimum distance!
Um, it may not be very deep, but it sounds very fucked up. I try to stay out of CAS swamps, but what were students supposed to do? (Also, does the stray square root sign in the report on 5(c)(i) mean anything?)
Yes, clearly the question was poorly vetted.
Two CAS approaches to get the answer: (1) use the “fmin” command to find the time at which the distance is minimised; (2) enter the equation on a graph page and use the graph minimum tool.
If solving for when the derivative of the distance function is zero, CAS gives a warning that additional solutions may exist. It would be consistent with VCAA’s arrogance to proclaim that students should use alternative methods to verify an answer if their technology gives them this warning.
I have very mixed feelings about this.
On the one hand, it’s clear that students have been set up for failure with this question.
But on the other hand, I would hope (and expect) that students have been taught to “use alternative methods to verify an answer if their technology gives them this warning”. This would probably happen with experienced teachers and/or teachers highly familiar and comfortable with the CAS calculator, but I suspect it would often be over-looked in the classroom. It’s a good example of where a teacher (and students) can learn a lot from an Examination Report in terms of preparing for the exam. Sanctimonious advice can nevertheless still be valuable advice to students and teachers in later years.
When ‘solving’ complicated equations (particularly like this one) I always try to encourage students to draw a graph (easily done with a CAS) to get an idea of the expected solution. It’s frustrating that you need a list of ‘CAS traps for young players’ with salient examples illustrating those traps as part of your teaching repertoire. It’s meant to be a *maths* examination.
Alternatively, maybe don’t set bullshit exam questions that randomly punish different machines and valid approaches.
Indeed. But it will be a cold day in hell before that happens. Last I heard, the exams were meant to be vetted using the common types of CAS calculators so that the questions are ‘calculator neutral’. However, I don’t see much evidence of this happening. There was a question some time ago (I can’t remember when) on the Methods Exam 2 where the equation was easily ‘solved’ using the TI but the Casio froze. There was a lot of angst among students and teachers who use the Casio.
It’s going to be very interesting this year in the Specialist exam 2 because there are several schools whose students will be using Mathematica rather than a CAS calculator. All students will sit the same exam. Having taught with both CAS calculators and Mathematica, I can confidently say that students using Mathematica will have an advantage. And students with an aptitude for coding will have a huge advantage. As far as I can read the tea-leaves, Mathematica will not be the compulsory CAS for at least another couple of years, so that advantage is going to remain (one could speculate that this is a deliberate ploy by VCAA to get all schools on board with Mathematica ….) But now we’re getting off-topic.
“There was a question some time ago (I can’t remember when) on the Methods Exam 2 where the equation was easily ‘solved’ using the TI but the Casio froze. There was a lot of angst among students and teachers who use the Casio.”
John, I believe the question you’re referring to is Q3(e) on the [2010 Methods Exam 2](https://www.vcaa.vic.edu.au/Documents/exams/mathematics/2010mmcas2-w.pdf). The CAS (whichever company) technology can sometimes get caught up in convoluted trig expressions etc, which causes a freeze when trying to solve an equation such as that required in the question.
Clearly not vetted correctly…
Delving into your discussions gives me a great hint.
Greatly appreciated (despite the WitCH being almost half a year old).
Inspired by all of you, gentlemen, I will make a YouTube video demonstrating both TI and Casio CAS use on the 2010 Maths Methods Exam 2 (particularly for that question), to help more kids avoid these kinds of problems.
I intend to include:
– when or when not to use tcollect/texpand/simplify
– how to make the ugly expressions more beautiful
– if stuck, how greater maths knowledge and content (beyond the MAM syllabus) would save you.
Quote of the day from expert JF:
“Sanctimonious advice can nevertheless still be valuable advice to students and teachers in later years.”
P.N.
Thanks, P.N. I don’t think I’ve written on that exam question. Should I? As for teaching students how to navigate Machine A or Machine B, yes, this is absolutely essential, and pedagogical lunacy.
https://www.heraldsun.com.au/news/victoria/answers-just-didnt-add-up-on-year-12-maths-methods-exam/news-story/fcdf73d3cb1f17e82978573a6208b116
Actually as reported in the press, those students got those questions remarked.
Hi P.N. Yes. And rightly so.
But that doesn’t take into account the psychological impact those questions had on these students during the exam – who knows how badly they performed on other questions as a result of this screw-up by VCAA.
My criticism has been consistent for years:
I do not blame the exam writers – it’s not easy writing exams. I blame the idiots who are meant to be vetting these exams.
Not a year goes by without non-trivial screw-ups on at least one of the maths exams. The reason for this is simple – incompetent vetting by a kakistocracy of imbeciles.
Indeed, JF. Last year the same thing happened, where the wrong axis was stated (horizontal axis instead of vertical axis or vice versa).
I couldn’t believe that the exam had gone through a writing process, solution writing process and vetting process and none of those revealed the error! The invigilators looked quite embarrassed at our school!
Yes, it is hard writing exams; I’ve tried doing that myself and it’s a challenging gig. However, errors are fine, as long as they are identified and corrected before printing en masse.
Steve,
Very fair points made.
I heard that on last year’s training day, the MM2 author was singled out by seventy-ish assessors for a number of mistakes made in the exam papers.
Probably he didn’t mean to make those mistakes, or he was too busy with other aspects of his school job, such as coordinating certain year levels, completing reports, camps, time-tabling, etc.
I felt quite astonished because the very first thing MM kids did last year was not commencing their paper, but crossing out “horizontal” and writing “vertical” : )
I asked some students I taught last year, and they were unhappy with that paper too.
That’s why we need more down-to-earth people in our education system, with appropriate, sound knowledge to teach the kids with righteousness and good ethics.
In fact, I strongly insist that the writing and vetting teams should work as a whole to the best of their ability (I strongly doubt the latter). It turns out that just having five people on the team doesn’t mean we have five layers of safety or 100% accuracy with the contents…
Re: “I heard that on last year’s training day, the MM2 author was singled out by seventy-ish assessors for a number of mistakes made in the exam papers.”
I actually feel sorry for the author. It’s the vetters that should be drawn-and-quartered.
Re: “It turns out, just have five people in the team, doesn’t mean we have five layers of safety in the 100% accuracy with the contents”
Even the MAV have a team of more than 5 (2 writers, 3 vetters and 2 blind reviewers). And that’s for a Trial exam!!
Re JF:
“(2014 and 2015 error free? Now there’s a challenge ….)”
JF, notice that I used quotation marks on “error-free”.
Honestly, I really couldn’t spot any apparent mistakes in the 2014/2015 Spesh exams. And they are quite good.
(you may reckon these questions are just a piece of cake of elementary mathematics)
A couple of years ago, I encountered the author of the 2015 Spesh exam in person at a PD.
He is an old gentleman, now retired, with a PhD. He gave us some useful resources for the implementation of 2016–2018 back then (now you can see it has been extended to 2021, laughably).
“Even MAV trial exams have 2 authors, 3 vetters, 2 reviewers”
Really? I thought there were only four people. That’s more than I expected.
I dare say only someone like you could generate good in-depth questions from the very start.
In my biased opinion, many times one >= five. But that one must play a vital part in writing a good paper in one go.
If ALL authors endeavoured to check extremely rigorously and thoroughly, just as you do, then we (all maths teachers in the state) would really be fuss-free.
Just a side story (regarding some inconsistencies in assessments and the study design): I need to mention one of your reputable old colleagues (he told me you and he had once worked together at Monash). I was informed by him: for the 2016 Spesh Exam 2, Q3a, if the candidate used “desolve” or “dSolve” to tackle that DE, a maximum of 2 marks out of 3 was paid…
That was disappointing, diminishing the students’ good understanding and skills in the use of technologies when solving modelling problems.
P.N.
Hi P.N.
Re: Side story. “For the 2016 Spesh Exam 2, Q3a, if the candidate used “desolve” or “dSolve” to tackle that DE, a maximum of 2 marks out of 3 was paid… That was disappointing, diminishing the students’ good understanding and skills in the use of technologies when solving modelling problems.”
Yes, this is the insanity of a technology-based exam. It is utter insanity to have a technology-based exam with questions that penalise a student for using the technology.
You are forced to read the tea-leaves (provided you know how):
3 marks therefore an answer (1 mark) and appropriate working (2 marks) are required.
Appropriate working:
1 mark) Stating an appropriate boundary condition that can be used for solving the DE (I’m guessing here. The marking scheme is top secret – only Assessors are allowed to know it. Schmucks like you and I have to guess).
1 mark) VCAA have consistently said that calculator syntax is not acceptable as working so the only choice is to solve the DE ‘by hand’.
So in the context of the question being worth 3 marks, technology can’t be used (but it can be used to check the final answer). Solve + 3 marks is code for doing it without technology.
It is total insanity that teachers have to teach students this sort of ‘exam technique’. An interesting ambiguity is what should a student do if this sort of question is only worth 2 marks ….
Pre-2016 the intent of the question would have been made explicit with the command “Use calculus to ….” but such commands have disappeared – maybe explicitly stating that technology must not be used in a technology-based exam was too embarrassing for VCAA. Maybe VCAA decided it was a better look to hide behind an implication.
P.S. I’m not sure who this ‘reputable colleague’ of mine was (I’m not sure I even have/had reputable colleagues at Monash) – maybe you could let me know through other channels.
JF,
If you look at the published version of the 2019 MM2 pdf on the VCAA website,
the word “vertical” is there, no more “horizontal”. Finally vetted, aftermaths.
People (including me) get so used to it each year.
2011 MM2 (0, -1) labeled as (-1, 0)
2013 SM2 all options were accepted
2014 “error free”
2015 “error free”
2016 as JF and Dr Marty indicated long time ago,
the “approximately normalised Erlang PDF”
2017 “derivative at endpoint”, and “infinite expressions of b in terms of a”
2018 “kgm/s^2 for change in momentum”
2019 “vertical” or “horizontal”
If there wasn’t any typo, errors, mistakes, flaws, etc., I would feel quite surprised.
By the way, JF, your literacy level is enormous!
Can’t believe that an expert maths teacher keeps a whole dictionary in his chest.
I didn’t know the last two words you used, so I looked into dictionary:
Kakistocracy: “a government system operated by a bunch of badass”
Imbeciles: “used to describe people with considerably weak intelligence quotient level”
Hi, P.N. Just so I’m clear, can you indicate which questions were flawed in 2011, 2013 and 2017?
Marty,
2011 MM2 ER Q4, the point (0, -1) was incorrectly labeled as (-1,0).
2013 SM2 MCQ Q6. The most complete set of angle theta should be the combination of options D and E. All options were accepted that year.
2017 MM1 last question, the function was initially defined on domain [0,1], where endpoint derivatives don’t exist, but then for the last part of that question, a tangent line was needed at x=1.
This conceptual flaw was heavily criticised by many students and experienced teachers.
Thanks, P.N. In general, if there are questions you think deserve whacking, I’m open to suggestions. On the ones you’ve mentioned here:
*) 2013: Jesus H. Christ. How did I not know about this one? I’ll post on it soon.
*) 2017: Yes this was horrible. I wrote about it here.
*) 2010 Casio/TI: nothing much to say, except “idiots”. I think there was at least one more recent case of brand discrepancy.
*) 2019: Obviously it shouldn’t occur, but given the error was caught before the exam, I’m less fussed than some others here. There’s something distasteful, however, about airbrushing the history (ditto 2011).
*) 2011: I didn’t know about this one. I wonder if the error was caught before the exam. If not, I wonder to what extent it caused confusion. (Given the diagram, the effect is not clear.)
VCAA’s air-brushing of the history is beyond distasteful; it is disgraceful and deceitful. They are NOT the exams that students sat. And not even an ‘Amended [insert date]’ ….
Par for the course from the masters of doublespeak and revisionism.
(2014 and 2015 error free? Now there’s a challenge ….)
P.N. – Thanks for pointing these things out.
For those people still filling out their VCAA error bingo card: 2014 MM2