WitCH 100: The Inequality of Robodebt

We were hoping for a special for our hundredth WitCH, but the chips fall when they fall. Still, it’s an odd one.

Robodebt is one of the greatest perversions of politics and public administration in Australian history. It is now reaching its appalling conclusion with the Royal Commission’s hearings, a grotesque procession of half-wits, cowards and sociopathic goons. Rick Morton, and pretty much only Rick Morton, has covered the just-ended hearings in maddening and heart-rending detail. We only await Commissioner Holmes’s inevitably damning report.

We had pondered writing something on Robodebt, if only to add our public declaration of disgust and to employ the expression “Little Eichmanns”. But, we could see no natural angle. Now, however, a statistician has provided a different angle.

On Friday, statistician Noel Cressie had an article in The Conversation, titled,

Robodebt not only broke the laws of the land – it also broke laws of mathematics

It’s a cute, and obviously false, title, although a little flamboyance there is fine. But, what is Cressie getting at? Well,

The artificial intelligence (AI) algorithm behind Robodebt has been called “flawed”. But it was worse than that; it broke laws of mathematics. A mathematical law called Jensen’s inequality shows the Robodebt algorithm should have generated not only debts, but also credits.

Cressie creates a hypothetical example, and then considers this example in light of the Centrelink payment curve and Jensen’s inequality:

Because Will’s income was higher in 2019 and spread across the part where the payment curve is convex, Jensen’s inequality guarantees he would receive a Robodebt notice, even though there was no debt.

In 2018, however, Will’s income distribution was spread around smaller amounts where the curve is concave. So if Jensen’s inequality was adhered to, the AI algorithm should have issued him a “Robocredit” – but it didn’t.

Is there any sense or purpose to Cressie’s article? Make your judgment. Oh, and did we mention the Little Eichmanns?

 

UPDATE (28/03/23)

Robodebt is not complicated. In brief:

1) At the instigation of government ministers, the Department of Human Services sought to recover overpayments from welfare recipients.

2) The primary method by which supposed overpayments were determined was “income averaging”: taking, for example, a person’s declared income for a year and dividing by 26 to give the person’s supposed fortnightly income. Then, if these “fortnightly incomes” indicated an “overpayment”, DHS had hit the jackpot and a debt notice was sent out. (A toy sketch of this arithmetic appears just after this list.)

3) The method was automated: the “robo” part of Robodebt.

4) If a person did not reply to the debt notice, or if they could not find work receipts or whatever to prove they were not overpaid, DHS took the debt to be valid and instigated proceedings for recovery.

5) DHS’s method for determining supposed overpayments was self-evidently bullshit. They all knew it.

6) The entire program was self-evidently immoral. They all knew it.

7) The entire program, which placed the burden of proof of innocence on the welfare recipient, was self-evidently illegal. They all knew it.
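
As flagged in point 2), here is a minimal sketch of the income-averaging arithmetic, in Python. The incomes are invented, and the toy benefit rule (a $100 fortnightly payment tapering to zero between $1000 and $1100 of earnings, roughly the approximate numbers we use further below) is a gross simplification of the real Centrelink rules; the point is only the shape of the calculation.

```python
# Toy sketch of Robodebt's "income averaging" (invented numbers, not DHS's actual rules).

def benefit(fortnightly_income):
    """Toy benefit rule: $100 per fortnight, tapering linearly to zero
    between $1000 and $1100 of fortnightly earnings."""
    return max(0.0, 100.0 - max(0.0, fortnightly_income - 1000.0))

# A casual worker with lumpy income: six big fortnights, twenty with nothing.
actual_fortnightly = [5200.0] * 6 + [0.0] * 20

# What the person was correctly paid, fortnight by fortnight.
correctly_paid = sum(benefit(x) for x in actual_fortnightly)

# Robodebt instead pretends every fortnight saw one twenty-sixth of the annual income.
averaged_income = sum(actual_fortnightly) / 26
averaged_entitlement = 26 * benefit(averaged_income)

print(f"averaged fortnightly income: ${averaged_income:.2f}")        # $1200.00
print(f"correctly paid:              ${correctly_paid:.2f}")         # $2000.00
print(f"averaged 'entitlement':      ${averaged_entitlement:.2f}")   # $0.00
print(f"phantom 'debt':              ${correctly_paid - averaged_entitlement:.2f}")
```

In this made-up scenario the person declared everything correctly and owes nothing; the “debt” is manufactured entirely by the averaging.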

That’s about it, although there is at least one other thing to add. Robodebt was only permitted to continue for the insane number of years that it did because of the obsequious and trivialised and goldfish-memoried news media. The publishers and reporters who covered Robodebt like a forgettable football match rather than as the great travesty that it was are almost as culpable as the gargoyle ministers and DHS goons.

So, what does the “first rate statistician” Noel Cressie have to add? A little, and it is interesting enough. What Cressie is adding, however, is almost impossible to discern from his article.

The main mathematical point of Cressie’s article is illustrated by the second kink in Cressie’s youth allowance curve, above. We’ve reproduced a version of that portion of the curve, below, with a few numbers and a straight line included to help with our explanation.

The graph indicates that if a person has earned $1000 in a given fortnight then they will receive an extra $100 benefit for that fortnight, and this benefit declines linearly, to zero when the earnings reach $1100. (Our numbers are approximate, but they will suffice.)

So, now imagine the person’s earnings over two fortnights, of $1000 and $1200. Then, on average, the person correctly receives a benefit of $50 per fortnight, indicated by the green dot. However, taking the person’s average earnings for the two fortnights, the Robodebt approach pretends the person has earned $1100 each fortnight, and thus is entitled to nothing for either fortnight: the red dot.
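
For concreteness, here is the same two-fortnight arithmetic in a few lines of Python (our approximate numbers, not the real youth allowance rates):

```python
# The two-fortnight example above: $1000 and $1200 earned (our approximate numbers).

def benefit(income):
    """Benefit near the second kink: $100 at $1000 earned, falling linearly to $0 at $1100."""
    return max(0.0, 100.0 - max(0.0, income - 1000.0))

fortnights = [1000.0, 1200.0]

green_dot = sum(benefit(x) for x in fortnights) / 2   # correct average benefit: $50
red_dot = benefit(sum(fortnights) / 2)                # benefit of the averaged income ($1100): $0

print(green_dot, red_dot)   # 50.0 0.0
```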

Our numbers are simple, but Cressie’s point is that Robodebt’s underestimate will always occur with the real numbers, as long as the incomes being averaged straddle the kink in the graph: the “benefit” determined by first averaging the fortnightly incomes will always be lower than the correct average benefit derived from the correct fortnightly incomes. In brief, the red dot will always be below the green dot.

Why does this happen? Because of the “convexity” of the benefit curve near the kink point, which guarantees that the averaging line, the green line, lies above the benefit curve. This is what Cressie is referring to with “Jensen’s inequality”, a broad generalisation of this line-convexity property, and of no real use here.
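
For the record, the half of Jensen’s inequality being invoked, in its simplest discrete form: if the benefit curve $B$ is convex on the relevant stretch, then the average of the benefits is at least the benefit of the average,

$$\frac{B(x_1)+B(x_2)+\cdots+B(x_n)}{n}\;\ge\;B\!\left(\frac{x_1+x_2+\cdots+x_n}{n}\right).$$

With $x_1 = 1000$ and $x_2 = 1200$, the left side is the green dot and the right side is the red dot.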

There’s more to Cressie’s article, some of it accurate, some of it weirdly off-base and all of it poorly presented, all of which can be approached and understood in the same manner. It’s a bad article.

But at least Cressie is condemning Robodebt. Which, as should not be forgotten, ever, was bullshit and immoral and illegal. And they all knew it.

23 Replies to “WitCH 100: The Inequality of Robodebt”

  1. Did the algorithm actually break Jensen’s inequality? Didn’t the algorithm demonstrate it and hence the problem?

    Also Cressie does not seem to get what Centrelink was actually doing.

    From what I understand, there was no real AI used. Centrelink had an automated process of income averaging to find possible overpayments and then had actual humans investigate and prove whether any debt was owed. What actually started causing the issues was that someone changed the process and said income averaging DID indicate debt and no investigation was necessary. This reversed the onus of proof and reduced the human resources needed to investigate and recover debt. The politicians looked great for being efficient, improving the budget bottom line and cracking down on those pesky dole bludgers. So it was actually applying a system in a grossly inappropriate and careless way, not the mathematics of the algorithm, that was the issue.

  2. My usual tangent.

    Governments are seeking to save money in their departments or quangos. So they bring in outside people, either as consultants or to administer these organizations. Such people have no deep knowledge of the work done, so they wing it, with the predictable chaos that ensues.

    The next scandal will be from the Bureau of Meteorology. For decades they were operated by scientists, quietly and effectively. The “government” wanted to impose restrictions on discussion of climate, so it imposed a management type at the head. This cancer rapidly worked its way down the organization. One clueless head decided to rewrite all software from the ground up; decades of trial and error were to be ignored. Outside consultants with no knowledge of computing in the scientific world were brought in to begin the process. Innocents who expressed alarm were silenced. After some years and hundreds of millions of dollars spent, this is now revealed to be going nowhere. Your taxes at work.

      1. Centrelink workers that I have met are professional and experienced with the complexity of the payment rules. My belief is that the RoboDebt software was built by outsiders with no experience in the department. They asked the government for feedback on their guesses and got the thuggish approval. Do you see the connection?

  3. Question – does Jensen’s inequality assume a continuous function?

    It seems to, but I’m not sure this has to be the case.

  4. OK, let me try a different question: if the curve is concave at point 1 and convex at point 2, does this always mean there is a point of inflection somewhere between points 1 and 2 if the curve is continuous at every point between 1 and 2?

    I’m still trying to understand Cressie’s argument, so am yet to decide whether I agree with it or not – hence my question is one of genuine curiosity, not leading anywhere.

    1. RF, I don’t think you need to get into these technicalities to figure out the meaning or merit of Cressie’s argument. Just take the curve to be joined-linear-bits, as Cressie has constructed it. (You can round off the kinks so the function is differentiable, but it doesn’t really matter.)

  5. Cressie is a first rate statistician.

    BTW, I liked the fact that the hypothetical Will Gossett was a student.

          1. I cannot judge; I had never heard of Robodebt until you posted this, so I don’t know anything about it. Sorry; I’m sure that others are better placed than I am to assess the article.

                1. You don’t have to have a go. Just acknowledge that it is possible for a first rate statistician to write a fifth rate article.

  6. OK… maybe I have totally missed something (it happens, often).

    Jensen’s inequality talks about a secant line, that is, an average rate of change between two points.

    In this hypothetical argument, the author seems to be talking about tangent lines drawn at two different points on a curve.

    It seems to me (and again, I have likely missed the main point) that Cressie is talking about the average of two averages somehow and this just seems… odd. I can’t for the moment think of exactly why it seems odd, it just feels like a key assumption is flawed.

    EDIT: I’ve just read the article in full (amateur mistake not to do this first, apologies) and now I really think there is a flaw in the logic. I’ll think a bit more on it and give a thoughtful reply soon.

  7. A related paper is:
    Cressie, N. (2020). Comment: When is it Data Science and when is it Data Engineering? Journal of the American Statistical Association, 115(530), 660-662, DOI:10.1080/01621459.2020.1762619

    This article is less technical than the article in The Conversation in its discussion of Online Compliance Intervention (robo-debt) but more technical on broader statistical issues.

    Also interesting is Chapter 4 of
    Chan, J. and Saunders, P. (Eds) (2021). Big data for Australian social policy: Developments, benefits and risks. Academy of the Social Sciences in Australia, Canberra.


    1. Meh.

      Cressie’s loathing of Robodebt is of course entirely appropriate, and he expresses his loathing well in the ASA article. But I don’t think he gives any new or particularly good insight into Robodebt there, or even much tries. In the Conversation article he at least tries. He fails, but he tries.

  8. In addition to Potii’s comment at the top, the following thoughts came to me when reading the article through a bit more slowly:

    1. Is the comparison actually fair and reasonable? As in, if different time-frames are used, and one of those coincides with when an AI was used to calculate possible debts, then Jensen’s inequality does not need to apply as the “underpayment” may fall outside the timeframe.

    2. Time is being used at points in the article as though it were (thanks Marty, I think I have the grammar correct now…?) the dependent variable, yet the graphic uses income as the independent variable.

    3. In addition to point 2, the word “average” by definition involves some form of division, so it is crucial to know what the denominator actually is: are we talking about average income? If so, does each “fortnight” have to be two weeks, or is there an average of the averages being taken somewhere?

    I’m not sure this all makes sense or if it is even close to the mark. It just seems to me that some assumptions have been made in the application of Jensen’s inequality that may not actually correspond to what happened in reality.

    1. Hi, RF. A big tick on the “were”. I think you’re basically getting what Cressie was trying to say, and why it is so hard to figure out what he was trying to say. I think I succeeded, but it wasn’t easy. Notwithstanding the first rateness of Cressie’s statisticking, his article is a complete mess. I’ll try to find time to update the post. He has an interesting point, it’s just impossible to see.
