PoSWW 20: Unconventional Wisdom

This one comes courtesy of frequent commenter John Friend. It is an example from Cambridge’s Mathematical Methods 3 & 4.

UPDATE (19/08/21)

It amazes me at times what does and does not concern some commenters. That’s not intended as a criticism. Well, it is, but it isn’t. And, it is. It’s complicated.

Continue reading “PoSWW 20: Unconventional Wisdom”

Cicchetti’s Random Shit

Readers will be aware that Trump and his MAGA goons have been pretending that Joe Biden stole the US election. They’ve been counting on the corruptness of sufficient judges and election officials for their fantasy grievances to gain traction. So far, however, and this was no gimme, the authorities have, in the main, been unwilling to deny reality.

The latest denial of the denial of reality came yesterday, with the Supreme Court telling Texas’s scumbag attorney general, and 17 other scumbag attorneys general, and 126 scumbag congressmen, to go fuck themselves. AG Paxton’s lawsuit, arguing to invalidate the election results in four states, was garbage in every conceivable way, and in a few inconceivable ways. One of those inconceivable ways was mathematical, which is why we are here.

As David Post wrote about here and then here, Paxton’s original motion claimed powerful statistical evidence, giving “substantial reason to doubt the voting results in the Defendant States” (paragraphs 9 – 12). In particular, Paxton claimed that Trump’s early lead in the voting was statistically insurmountable (par 10):

“The probability of former Vice President Biden winning the popular vote in the four Defendant States—Georgia, Michigan, Pennsylvania, and Wisconsin—independently given President Trump’s early lead in those States as of 3 a.m. on November 4, 2020, is less than one in a quadrillion, or 1 in 1,000,000,000,000,000.”

Similarly, Paxton looked to Trump’s defeat of Clinton in 2016 to argue the unlikelihood of Biden’s win in these states (par 11):

“The same less than one in a quadrillion statistical improbability … exists when Mr. Biden’s performance in each of those Defendant States is compared to former Secretary of State Hilary Clinton’s performance in the 2016 general election and President Trump’s performance in the 2016 and 2020 general elections.”

On the face of it, these claims are, well, insane. So, what evidence did Paxton produce? It appeared in Paxton’s subsequent motion for expedited consideration, in the form of a Declaration to the Court by “Charles J. Cicchetti, PhD” (pages 20-29). Cicchetti’s Declaration has to be read to be believed.

Cicchetti’s PhD is in economics, and he is a managing director of a corporate consulting group called Berkeley Research Group. BRG appears to have no role in Paxton’s suit, and Cicchetti doesn’t say how he got involved; he simply writes that he was “asked to analyze some of the validity and credibility of the 2020 presidential election in key battleground states”. Presumably, Paxton was just after the best.

It is excruciating to read Cicchetti’s entire Declaration, but there is also no need. Amongst all the Z-scores and whatnot, Cicchetti’s argument is trivial. Here is the essence of Cicchetti’s support for Paxton’s statements above.

In regard to Trump’s early lead, Cicchetti discusses Georgia, comparing the early vote and late vote distributions (par 15):

“I use a Z-score to test if the votes from the two samples are statistically similar … There is a one in many more than quadrillions of chances that these two tabulation periods are randomly drawn from the same population.”

Similarly, in regard to Biden outperforming Clinton in the four states, Cicchetti writes

 “I tested the hypothesis that the performance of the two Democrat candidates were statistically similar by comparing Clinton to Biden … [Cicchetti sprinkles some Z-score fairy dust] … I can reject the hypothesis many times more than one in a quadrillion times that the two outcomes were similar.”

And, as David Post has noted, that’s all there is. Cicchetti has demonstrated that the late Georgia votes skewed strongly to Biden, and that Biden outperformed Clinton. Both of which everybody knew was gonna happen and everybody knows did happen.

None of this, of course, supports Paxton’s claims in the slightest. So, was Cicchetti really so stupid as to think he was proving anything? No, Cicchetti may be stupid but he’s not that stupid; Cicchetti briefly addresses the fact that his argument contains no argument. In regard to the late swing in Georgia, Cicchetti writes (par 16)

“I am aware of some anecdotal statements from election night that some Democratic strongholds were yet to be tabulated … [This] could cause the later ballots to be non-randomly different … but I am not aware of any actual [supporting] data …”

Yep, it’s up to others to demonstrate that the late votes went to Biden. Which, you know they kind of did, when they counted the fucking votes. As for Biden outperforming Clinton, Cicchetti writes (par 13),

“There are many possible reasons why people vote for different candidates. However, I find the increase of Biden over Clinton is statistically incredible if the outcomes were based on similar populations of voters …”

Yep, Cicchetti finds it “incredible” that four years of that motherfucker Trump had such an effect on how people voted.

What an asshole.

WitCH 47: A Bad Inflection

The question below is from the first 2020 Specialist exam (not online). It has been discussed in the comments here, and the main issues have been noted, but we’ve decided the question is sufficiently flawed to warrant its own post.

UPDATE (10/09/21) For those who’d placed a wager, the examination report (Word-doc-VCAA-stupid) indicates that a second derivative argument was expected. Hence, thousands of VCE students no longer have any sense of what VCAA means by “hence”.

WitCH 46: Paddling in the Gene Pool

The question below is from the first Methods exam (not online), held a few days ago, and which we’ll write upon more generally very soon. The question was brought to our attention by frequent commenter Red Five, and we’ve been pondering it for a couple of days; we’re not sure whether it’s sufficient for a WitCH, or is a PoSWW, or is just a little silly. But, whatever it is, it’s pretty annoying, so what the hell.

Bernoulli Trials and Tribulations

This one feels relatively minor to us. It is, however, a clear own goal from the VCAA, and it is one that has annoyed many Mathematical Methods teachers. So, as a public service, we’re offering a place for teachers to bitch about it.*

One of the standard topics in Methods is the binomial distribution: the probabilities you get when repeatedly performing a hit-or-miss trial. Binomial probability was once a valuable and elegant VCE topic, before it was destroyed by CAS. That, however, is a story for another time; here, we have smaller fish to fry.

The hits-or-misses of a Binomial distribution are sometimes called Bernoulli trials, and this is how they are referred to in VCE. That is just jargon, and it doesn’t strike us as particularly useful jargon, but it’s ok.** There is also what is referred to as the Bernoulli distribution, where the hit-or-miss is performed exactly once. That is, the Bernoulli distribution is just the n = 1 case of the binomial distribution. Again, just jargon, and close to useless jargon, but still sort of ok. Except it’s not ok.

Neither the VCE study design nor, we’re guessing, any of the VCE textbooks, makes any reference to the Bernoulli distribution. Which is why the special, Plague Year formula sheet listing the Bernoulli distribution has caused such confusion and annoyance:

Now, to be fair, the VCAA were trying to be helpful. It’s a crazy year, with big adjustments on the run, and the formula sheet*** was heavily adapted for the pruned syllabus. But still, why would one think to add a distribution, even a gratuitous one? What the Hell were they thinking?

Does it really matter? Well, yes. If “Bernoulli distribution” is a thing, then students must be prepared for that thing to appear in exam questions; they must be familiar with that jargon. But then, a few weeks after the Plague Year formula sheet appeared, schools were alerted and VCAA’s Plague Year FAQ sheet**** was updated:

This very wordy weaseling is VCAA-speak for “We stuffed up but, in line with long-standing VCAA policy, we refuse to acknowledge we stuffed up”. The story of the big-name teachers who failed to have this issue addressed, and of the little-name teacher who succeeded, is also very interesting. But, it is not our story to tell.

 

*) We extend our standard apology to all precious statisticians for our language.

**) Not close to ok is the studied and foot-shooting refusal of the VCAA and textbooks to use the standard and very useful notation q = 1 - p.

***) Why on Earth do the exams have a formula sheet?

****) The most frequently asked question is, “Why do you guys keep stuffing up?”, but VCAA haven’t gotten around to answering that one yet.

WitCH 33: Below Average

We’re not actively looking for WitCHes right now, since we have a huge backlog to update. This one, however, came up in another context and, after chatting about it with commenter Red Five, there seemed no choice. The following 1-mark multiple choice question appeared in 2019 Exam 2 (CAS) of VCE’s Mathematical Methods. The problem was to determine Pr(X > 0), the possible answers being

A. 2/3      B. 3/4      C. 4/5      D. 7/9      E. 5/6

Have fun.

Update (04/07/20)

Who writes this crap? Who writes such a problem, who proofreads such a problem, and then says “Yep, that’ll work”? Because it didn’t work, and it was never going to. The examination report indicates that 27% of students gave the correct answer, a tick or two above random guessing.
 
We’ll outline a solution below, but first to the crap. The main awfulness is the double-function nonsense, defining the probability distribution \boldsymbol{f} in terms of pretty much the same function \boldsymbol{p}. What’s the point of that? Well, of course \boldsymbol{f} is defined on all of \boldsymbol{R} and \boldsymbol{p} is only defined on \boldsymbol{[-a,b]}. And, what’s the point of defining \boldsymbol{f} on all of \boldsymbol{R}? There’s absolutely none. It’s completely gratuitous and, here, completely ridiculous. It is all the worse, and all the more ridiculous, since the function \boldsymbol{p} isn’t properly defined or labelled piecewise linear, or anything; it’s just Magritte crap.
 
To add to the Magritte crap, commenter Oliver Oliver has pointed out the hilarious Dali crap, that the Magritte graph is impossible even on its own terms. Beginning in the first quadrant, the point \boldsymbol{(b,b)} is not quite symmetrically placed to make a \boldsymbol{45^{\circ}} angle. And, yeah, the axes can be scaled differently, but why would one do it here? But now for the Dali: consider the second quadrant and ask yourself, how are the axes scaled there? Taking a hit of acid may assist in answering that one.
 
Now, finally to the problem. As we indicated, the problem itself is fine, it’s just weird and tricky and hellishly long. And worth 1 mark.
 
As commenters have pointed out, the problem doesn’t have a whole lot to do with probability. That’s just a scenario to give rise to the two equations, 
 
1) \boldsymbol{a^2 + \frac{b}{2}\left(2a+b\right) = 1} \qquad \mbox{(triangle + trapezium = 1).}
 
and
 
2) \boldsymbol{a + b = \frac{4}{3}} \qquad \mbox{(average = 3/4).}
 
The problem is then to evaluate
 
*) \boldsymbol{\frac{b}2(2a + b)} \qquad \mbox{(trapezium).}
 
or, equivalently, 
 
**) \boldsymbol{1 - a^2} \qquad \mbox{(1 - triangle).}
 
 
The problem is tricky, not least because it feels as if there may be an easy way to avoid the full-blown simultaneous equations. This does not appear to be the case, however. Of course, the VCAA just expects the lobotomised students to push the damn buttons which, one must admit, saves the students from being tricked.
 
Anyway, for the non-lobotomised among us, the simplest approach seems to be that indicated below, by commenter amca01. First multiply equation (1) by 2 and rearrange, to give
 
3) \boldsymbol{a^2 + (a + b)^2 = 2}.
 
Then, plugging in (2), we have 
 
4) \boldsymbol{a^2 = \frac29}.
 
That then plugs into **), giving the answer 7/9. 
 
Very nice. And a whole 90 seconds to complete, not counting the time lost making sense of all the crap. 
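For anyone who wants to check amca01’s algebra without pushing the damn buttons, the whole thing can be done in exact fractions:

```python
from fractions import Fraction

# Equation (2): a + b = 4/3
s = Fraction(4, 3)

# Equation (3), i.e. equation (1) doubled and rearranged: a^2 + (a+b)^2 = 2
a_squared = 2 - s**2      # equation (4): a^2 = 2 - 16/9 = 2/9

# **): the required probability is 1 - a^2 (1 - triangle)
answer = 1 - a_squared
print(answer)             # 7/9, answer D
```

Exact arithmetic, no CAS, and no rounding to obscure the "hence".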

The Super-Rigging of Gambling

Last week, the ABC set to bashing bet365, bringing to light some of the huge betting company’s unsavoury practices. To which we respond, “Well done”. And, “Well, duh”.

The ABC noted a number of dodgy tactics employed by bet365, writing it all up as an astonishing revelation. Perhaps the ABC reporters and their cloistered readers were astonished, but many Australian gamblers would have simply yawned. All gambling companies employ similar tactics and they’ve always done it. It is not new and it is not news. It is all part of the standard super-rigging of gambling.

To begin, it is no secret that gambling is rigged; even bad gamblers know that the odds are stacked against them. Mathematically, the rigging of a game is expressed in terms of expectation. In a fair game the average or “expected” win is zero. For example, flipping a coin in the natural win-lose manner is fair. By comparison, roulette has 37 possible outcomes but the payouts are calculated as if there were only 36 numbers. (The payout is “even money” if you bet on “red” or “black”, and the payout is “35 to 1” if you bet on a number.) This implies that the average loss per spin on roulette is 1/37 of the amount bet, or an expectation of about -3%. The expectation being negative indicates the rigging.
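The roulette arithmetic above is easy to check. Both standard bets on a European wheel come out to the same expectation:

```python
# European roulette: 37 pockets (0-36), 18 red, 18 black, 1 green zero.

# Even-money bet on red: win $1 with probability 18/37, else lose $1.
p_red = 18 / 37
ev_red = p_red * 1 + (1 - p_red) * (-1)

# Single-number bet: paid 35 to 1, but the true odds are 36 to 1.
p_number = 1 / 37
ev_number = p_number * 35 + (1 - p_number) * (-1)

# Both equal -1/37, about -2.7% per dollar bet: the rigging.
print(ev_red, ev_number)
```

The payouts are calculated as if the zero didn’t exist, which is exactly where the house’s 1/37 comes from.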

Given that gambling institutions intend to offer only rigged, negative expectation games, what can punters do about it? Lots, and not much. They can cheat, of course. Or, they can become experts on horses or golfers or whatever. Or, they can look for mechanical or human flaws. There’s a surprising number of avenues to explore as well as, of course, many dead ends. (To illustrate the subtlety, we’ve included a few gambling puzzles at the end of the post.) Finding and exploiting opportunities, however, takes work and/or sophistication and/or capital. There’s lunch there, but it’s not free.

So, as a general rule, punters are left with only losing games to play. But how, then, does a gambling site entice a punter to play a game of negative expectation?

Yes, it’s a stupid question. Obviously there’s no shortage of punters willing to bet on appallingly bad games. But, if you run a gambling site, the real question is how to get the punter to gamble on your site. And that’s where one form of super-rigging begins. Super-rigging is making a betting opportunity appear better than it is. This is built into the way poker machines work, and betting sites do it as a matter of routine.

In betting, top sites have various ways of enticing punters. To begin, there are sign-up bonuses. So, for example, you might sign up with a $200 deposit and the site will throw in $100 of “free bets”. That’s akin to signing up for ten sessions at a gym and getting a few “free” lessons chucked in. It’s basically fine, with what you see being pretty much what you get. After that, however, there are innumerable betting “promotions”, many blasting out from the TV and destroying everyone’s enjoyment of the footy. (Unless you’re a Saints fan, in which case any distraction from the actual game is considered a plus.)

The effect of gambling promotions is to change the expectation of the bets. For example, a very common offer is “money back” if the punter bets on a horse and that horse comes 2nd or 3rd. (That “money back” is most commonly in the form of a “free bet” equal to the size of the original wager, which is an important distinction but one we can ignore here.) Then, given a good horse may have, say, a 30% chance of coming 2nd or 3rd, an expectation of about -10% may become an expectation of about +20%. There’s no guarantee of winning on that race, of course, but it’s now a sensible bet. These promotions are obviously attractive to punters.
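To see how such a shift might arise, here is an illustrative calculation. The decimal odds and win probability below are our assumptions, chosen so the plain bet has roughly the -10% expectation mentioned above; the only figure from the example itself is the ~30% chance of 2nd or 3rd, and (as flagged) we value the “free bet” refund at face value:

```python
stake = 1.0
decimal_odds = 3.0   # assumed: $3 returned per $1 staked if the horse wins
p_win = 0.30         # assumed: chance the horse wins
p_place = 0.30       # chance of 2nd or 3rd (the figure in the example)

# Plain bet: expected profit per $1 staked, roughly -10%
ev_plain = p_win * decimal_odds - stake

# With the promotion, 2nd or 3rd refunds the stake (taken at face value here),
# lifting the expectation to roughly +20%
ev_promo = ev_plain + p_place * stake

print(ev_plain, ev_promo)
```

Of course, a real free-bet refund is worth less than cash, so the true boost is somewhat smaller; the point is only that a decent promotion can flip the sign of the expectation.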

How do the betting sites avoid losing a ton of money on these promotions? Often they don’t have to do much of anything. To begin, most promotions will come with a relatively small maximum bet size, of $50 or so; this is fair enough, just the same as Coles limiting some sale item to “five per customer”. Beyond that, the promotion can be pretty much what it appears to be, in itself a loser for the company but good advertising to get the punters onto the site to bet further. But there are also traps and nasty tricks.

First of all, betting promotions vary dramatically in value, with more than a few being close to worthless. They can be analogous to Motor Heaven blaring that a car is “50% off”, after having doubled the price the previous week. Secondly, even valuable promotions can be used poorly. The horse promotion above, for example, would be essentially worthless if used to bet on a massive favourite or a sluggish also-ran. Again, one might compare this to a commercial situation, say Harvey Norman giving $10 off on any one item in the store and someone using that offer when buying an overpriced TV.

Amidst all the noise, however, there are many good promotions that can create positive expectation on small bets when used intelligently. So, what happens then? Then what happens is what the ABC story is all about.

The gambling sites simply nobble any punter who is not a loser, in any manner they can: they will refuse to offer the promotions; they will limit the size of bets to approximately zero; they will lower the odds. What does that leave? It leaves the betting sites screaming out their offers, everywhere. But, any gambler who is halfway successful is banned from those offers, if not from the site entirely.

And that is the super-rigging. The betting sites pretend they are offering positive expectation, but they will only continue that offer for people who use the offer in a useless manner. And, unlike the other aspects we have mentioned, such nasty practice has no commercial analogy that anyone would regard as acceptable. Imagine going into Harvey Norman and being shoved out the door, with some thug yelling “You only buy items on special, so we don’t want you here”. It is unthinkable at Harvey Norman but, in the context of gambling, it is universal.

How can the betting sites get away with this nastiness? Because the ACCC, the federal body responsible for overseeing and enforcing consumer law, is all bark and no bite. And, because the state governments and government regulators only care about whether they’re getting their cut of the loot.

It is obscene. And, as we indicated, none of it is news.

 

PUZZLES

Here are three gambling puzzles. If you are familiar with the puzzles and are sure you already know the answers, then please refrain from commenting for a while, leaving others free to think about them.

Puzzle 1. You are gambling on roulette, which has 18 red numbers, 18 black numbers and 1 green number (the zero). You watch the wheel spin and the ball lands on a red number. What colour should you bet on next, red or black? Or, doesn’t it matter?

Puzzle 2. A casino gives you a free bet of $10. You can place the bet on any standard casino game, or on a horse, or whatever. If the bet wins, you get your winnings as usual. (For example, if you bet “red” on roulette and win, you’d win $10.) Win or lose, the casino keeps the coupon. How much is the free bet worth?

Puzzle 3. You have found a betting game with positive expectation; it’s win-lose (like betting on “red” or “black” in roulette), but you have a 55% chance of winning and only a 45% chance of losing. You start with $1000 and hope to double your money. What is the probability that you will succeed before losing your $1000?

The Median is the Message

Our first post concerns an error in the 2016 Mathematical Methods Exam 2 (year 12 in Victoria, Australia). It is not close to the silliest mathematics we’ve come across, and not even the silliest error to occur in a Methods exam. Indeed, most Methods exams are riddled with nonsense. For several reasons, however, whacking this particular error is a good way to begin: the error occurs in a recent and important exam; the error is pretty dumb; it took a special effort to make the error; and the subsequent handling of the error demonstrates the fundamental (lack of) character of the Victorian Curriculum and Assessment Authority.

The problem, first pointed out to us by teacher and friend John Kermond, is in Section B of the exam and concerns Question 3(h)(ii). This question relates to a probability distribution with “probability density function”

    \[ f(x) = \begin{cases} \dfrac{(210-x)e^{\frac{x-210}{20}}}{400} & 0\leqslant x \leqslant 210,\\ 0 & \text{elsewhere.} \end{cases} \]

Now, anyone with a good nose for calculus is going to be thinking “uh-oh”. It is a fundamental property of a PDF that the total integral (underlying area) should equal 1. But how are all those integrated powers of e going to cancel out? Well, they don’t. What has been defined is only approximately a PDF, with a total area of 1 - \frac{23}{2}e^{-21/2} \approx 0.9997. (It is easy to calculate the area exactly using integration by parts.)
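For the doubters, the area is also easy to check numerically, no CAS required:

```python
import math

def f(x):
    # The exam's "probability density function" on [0, 210]
    return (210 - x) * math.exp((x - 210) / 20) / 400

def simpson(g, a, b, n=20000):
    # Composite Simpson's rule
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

area = simpson(f, 0, 210)
exact = 1 - 11.5 * math.exp(-10.5)   # the by-parts value, 1 - (23/2)e^(-21/2)
print(area, exact)                   # both about 0.99968, and definitely not 1
```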

Below we’ll discuss the absurdity of handing students a non-PDF, but back to the exam question. 3(h)(ii) asks the students to find the median of the “probability distribution”, correct to two decimal places. Since the question makes no sense for a non-PDF, of course the VCAA have shot themselves in the foot. However, we can still attempt to make some sense of the question, which is when we discover that the VCAA have also shot themselves in the other foot.

The median m of a probability distribution is the half-way point. So, in the integration context here we want the m for which

a)      \phantom{\quad}  \int\limits_0^m f(x)\,{\rm d}x = \dfrac12.

As such, this question was intended to be just another CAS exercise, and so both trivial and pointless: push the button, write down the answer and on to the next question. The problem is, the median can also be determined by the equation

b)     \phantom{\quad}  \int\limits_m^{210} f(x)\,{\rm d}x = \dfrac12,

or by the equation

c)     \phantom{\quad} \int\limits_0^m f(x)\,{\rm d}x = \int\limits_m^{210} f(x)\,{\rm d}x.

And, since our function is only approximately a PDF, these three equations necessarily give three different answers: to the demanded two decimal places the answers are respectively 176.45, 176.43 and 176.44. Doh!
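The three answers are easy to reproduce. Integration by parts gives the closed form F(m) below for the area from 0 to m, and then each of a), b) and c) is a quick bisection:

```python
import math

def F(m):
    # F(m) = integral of f from 0 to m, computed by parts
    return ((230 - m) * math.exp((m - 210) / 20) - 230 * math.exp(-10.5)) / 20

total = F(210)   # about 0.99968, not 1

def solve(target):
    # Bisection: F is increasing on [0, 210]
    lo, hi = 0.0, 210.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if F(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

m_a = solve(0.5)           # a): F(m) = 1/2
m_b = solve(total - 0.5)   # b): total - F(m) = 1/2
m_c = solve(total / 2)     # c): F(m) = total - F(m)
print(round(m_a, 2), round(m_b, 2), round(m_c, 2))   # 176.45 176.43 176.44
```

Three equations, three “correct to two decimal places” medians. Doh, indeed.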

What to make of this? There are two obvious questions.

1. How did the VCAA end up with a PDF which isn’t a PDF?

It would be astonishing if all of the exam’s writers and checkers failed to notice the integral was not 1. It is even more astonishing if all the writers-checkers recognised and were comfortable with a non-PDF. Especially since the VCAA can be notoriously, absurdly fussy about the form and precision of answers (see below).

2. How was the error in 3(h)(ii) not detected?

It should have been routine for this mistake to have been detected and corrected with any decent vetting. Yes, we all make mistakes. Mistakes in very important exams, however, should not be so common, and the VCAA seems to make a habit of it.

OK, so the VCAA stuffed up. It happens. What happened next? That’s where the VCAA’s arrogance and cowardice shine bright for all to see. The one and only sentence in the Examiners’ Report that remotely addresses the error is:

“As [the] function f  is a close approximation of the [???] probability density function, answers to the nearest integer were accepted”. 

The wording is clumsy, and no concession has been made that the best (and uniquely correct) answer is “The question is stuffed up”, but it seems that solutions to all of a), b) and c) above were accepted. The problem, however, isn’t with the grading of the question.

It is perhaps too much to expect an insufferably arrogant VCAA to apologise, to express anything approximating regret for yet another error. But how could the VCAA fail to understand the necessity of a clear and explicit acknowledgement of the error? Apart from demonstrating total gutlessness, it is fundamentally unprofessional. How are students and teachers, especially new teachers, supposed to read the exam question and report? How are students and teachers supposed to approach such questions in the future? Are they still expected to employ the precise definitions that they have learned? Or, are they supposed to now presume that near enough is good enough?

For a pompous finale, the Examiners’ Report follows up by snarking that, in writing the integral for the PDF, “The dx was often missing from students’ working”. One would have thought that the examiners might have dispensed with their finely honed prissiness for that one paragraph. But no. For some clowns it’s never the wrong time to whine about a missing dx.

UPDATE (16 June): In the comments below, Terry Mills has made the excellent point that the prior question on the exam is similarly problematic. 3(h)(i) asks students to calculate the mean of the probability distribution, which would normally be calculated as \int xf(x)\,{\rm d}x. For our non-PDF, however, we should normalise by dividing by \int f(x)\,{\rm d}x. To the demanded two decimal places, that changes the answer from the Examiners’ Report’s 170.01 to 170.06.
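Again, this is easy to check numerically:

```python
import math

def f(x):
    # The exam's "PDF" on [0, 210]
    return (210 - x) * math.exp((x - 210) / 20) / 400

def simpson(g, a, b, n=20000):
    # Composite Simpson's rule
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

raw = simpson(lambda x: x * f(x), 0, 210)    # the Report's "mean"
total = simpson(f, 0, 210)                   # about 0.99968, not 1
print(round(raw, 2), round(raw / total, 2))  # 170.01 versus 170.06
```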

UPDATE (05/07/22): The examination report was updated on 18/07/20, and now (mostly) fesses up to the nonsense in 3(h)(ii). There is still no submission for the parallel nonsense in 3(h)(i).