A couple of days ago we wrote about the Federal Government’s new, insidious plan to legislate on “misinformation and disinformation”. That distracted us from writing about insidious legislation already in place, and the insidious nature of its recent application. Here we go.
Last week, Twitter was hit with a “demand” from Julie Inman Grant, Australia’s eSafety Commissioner. Fanfared with an eSafety media release, a “Twitter must come clean” sermon from Inman Grant herself, and a predictably gullible SMH puff piece from champion stenographer, Jordan Baker, the demand was in the form of a legal notice, “seeking information about what the social media giant is doing to tackle online hate on the platform”.
To be clear, Twitter has always had a significant cesspool element. It would also be no surprise, as is claimed by eSafety, if Twitter were getting cesspoolier. But that doesn’t mean that eSafety’s overlording is even in principle appropriate, and it doesn’t mean that Inman Grant is remotely trustworthy. It is not, and she is not.
Almost every line of eSafety’s media release and of Inman Grant’s companion sermon feels manipulative. We don’t trust any of it. We don’t have time, however, to engage in a thorough fact check. We’ll concentrate upon some Neon Bad Aspects, and readers are invited to explore further.
Near the start of her sermon, Inman Grant expresses how much hope she once had for Twitter:
Due to the spontaneous, open and viral nature of Twitter, I once believed that no other platform held such promise of delivering true equality of thought and free expression. On the back of the remarkable Arab Spring transformation, sometimes referred to in the Middle East as the “Twitter Revolution,” I was so convinced of the company’s potential for positive social change that I went to work there in 2014.
This is dangerous and entirely inappropriate territory for a government censor. Yes, we’re all for “positive social change”, except, of course, that what is “positive” can be very much in the eye of the beholder. It is not helped by Inman Grant’s weird and dangerously vague phrase, “true equality of thought and free expression”. Yes, she may simply be referring to the democratisation of publishing, but there is good reason to believe she is thinking differently, that “true equality” differs from “equality”. Inman Grant appears to have Orwelled the genuine free speech issue of the Heckler’s veto into a censoring trump card.
Of course, Twitter changed dramatically with its purchase by Elon Musk in October 2022. eSafety states that it has witnessed a rise in complaints about “online hate”, and the media release and sermon both link this rise to Musk’s takeover:
I am concerned that this may be linked to Twitter’s “general amnesty,” offered last November to around 62,000 permanently banned account holders. To be permanently banned from Twitter means repeated and egregious violations of the Twitter Rules. Seventy-five of these reinstated abusive account holders reportedly have over 1 million followers, meaning a small few may be potentially contributing to an outsized impact on the platform’s toxicity.
This makes it clearer what Inman Grant means by “true equality”. Evidently, some Twitter accounts are more equal than others.
The above paragraph is stunningly obtuse, and probably wilfully dishonest. We thank God we’re not in the business of monitoring and (self-)policing social media, and of course there will be cause to suspend or ban certain accounts. But, first, if Inman Grant is not aware that Twitter had a systemic issue with accounts being banned and shadow-banned and throttled for poor or invalid or arbitrary or entirely invisible reasons then she has no business commenting on Twitter, much less policing it. Secondly, Inman Grant is especially concerned with the reinstatement of an “abusive account holder” with a million-plus followers, without stopping for a moment to consider what the banning of such an account meant in the first place. If a million people are following someone, even if that someone is an abusive asshole – and that should not be taken for granted in the manner Inman Grant unthinkingly does – then a hell of a lot of people are being offered something by that asshole. It may not be something that is good for society in any objective manner, but there is something there, and it is far from clear that simply banning such an account does anything to address whatever real issue there may be, rather than simply making it worse by disenfranchising multitudes of angry people. These are difficult and subtle issues, but Inman Grant has all the subtlety of a Censor Truck.
As for the legal notice to Twitter, eSafety declared that,
[the notice was issued] under section 56(2) of the Online Safety Act. This notice requires Twitter to explain what it is doing to minimise online hate, including how it is enforcing its terms of use and hateful conduct policy. [emphasis added]
For the announcement of a legal notice this appears to be astonishingly inaccurate: the term “hate” appears nowhere in the Online Safety Act, with s56(2) referring only to “basic online safety expectations”. Of course “hate” and “hateful conduct” will be an aspect of “online safety”, but one must know what eSafety considers these terms to mean. In fact, one can get some sense of eSafety’s conceptualisation of “hate”, although it takes a ridiculous amount of work to do so.
Inman Grant’s sermon and the media release quoting the sermon both lean on the prevalence of “online hate”:
eSafety’s own research shows that nearly 1 in 5 Australians have experienced some form of online hate.
No time period is given for when this “online hate” was “experienced”, no definition is even suggested, and no reference is provided for eSafety’s “research”. Hunting a little, it seems that the “nearly 1 in 5” figure comes from eSafety’s 2022 survey of “Negative Online Experiences”, released in February this year; the accompanying media release notes that
Almost one in five Australian adults reported at least one of these [various and negative] experiences in the last 12 months
So far, all that appears to have been published of this “research” are two infographics, neither of which uses the term “hate”. If the “nearly 1 in 5” figure comes from somewhere else, however, we cannot find it. It seems unlikely.
Before continuing our hunt for eSafety’s definition of “hate”, it is worth making a couple of comments on this “nearly 1 in 5” figure, whatever it means. First of all, eSafety’s survey was conducted in 2022, and almost certainly prior to Musk’s purchase of Twitter. As such, the figure has absolutely no relevance to whatever the effect might have been of Musk’s “amnesty”. Secondly, and much more importantly, eSafety’s “research” does not by any stretch “show” what Australians experienced: all it “shows” is what certain Australians, selected in an unknown manner and choosing to answer in unknown proportions for unknown reasons, claim to have experienced. This is not to pretend that these claims have no validity, but it is obviously absurd to take such claims as automatic truth, as Inman Grant does. And again, this is all still without any clue what the people surveyed were asked.
Searching more, we find eSafety’s earlier, 2019 survey. For this survey there is both an infographic and a full report, both of which make clear eSafety’s meaning of “hate speech”. The report notes (p6) that the prevalence of “online hate speech” was based upon responses to the question,
“Have you received digital communication that offended, discriminated, denigrated, abused and/or disparaged you because of your personal identity/beliefs (e.g. race, ethnicity, gender, nationality, sexual orientation, religion, age, disability, etc.)?”
OK, fine, sort of. But would one naturally refer to all such occurrences of “online hate speech” as instances of “online hate”? Perhaps a one-off slur could be regarded as “online hate”, but plenty of such instances seem like pretty small beer. Moreover, someone may be “offended” by criticism of their “personal … beliefs”, but it is very far from automatic that such criticism amounts to “hate”, even when strongly worded. In sum, and although there is a clear problem with online abuse, and more fundamentally with the diminishing concern for (once) common decency, that “nearly 1 in 5” figure feels pretty “meh”.
Finally, a word about eSafety’s referencing of some fellow Twitter-bashers. The media release and sermon both note that the Anti-Defamation League, GLAAD and the Centre for Countering Digital Hate reported a significant increase in abusive posts on Twitter following Musk’s appearance. Maybe true, but these groups have their own agendas and they cannot remotely be considered objective sources: anything they say should be taken with a big dose of salt. The Anti-Defamation League, in particular, is notorious for crying wolf. As for GLAAD, just yesterday they organised the publication of an open letter, calling on the social media giants to censor, amongst other things, “malicious lies and disinformation about medically necessary healthcare for transgender youth”. Given that there’s a decent argument that plenty of this “medically necessary healthcare for transgender youth” is in reality the unnecessary, disgraceful Frankensteining of confused kids, we have a perfect example of what Inman Grant would presumably wish to censor wholesale as hate speech but which need be nothing of the sort. And then there’s CCDH, which Matt Taibbi has pegged:
An NGO cut-out engaged in brazen smearing, attacking of dissenting views, deplatforming, censoring and pro-active shrinkage of the Overton window.
CCDH are experts in strategically conflating serious voices with the fringes, mixing them together to isolate genuine actors and squash dissent. What is unique about CCDH is its blatant distortions, vicious tone, and cynical appropriation of anti-racist, anti-sexist, and public health rhetoric. The group promotes explicitly pro-censorship and deplatforming positions, and pushes the boundaries of the new normal.
Little wonder that Inman Grant is a fan.
What is going to happen now? Twitter has a few weeks to respond before the fines start rolling in. Jordan Baker notes that Twitter responded to SMH‘s enquiries in a less than diplomatic manner:
When Twitter was contacted for a response, it sent an automated reply with its standard response to press queries: a poo emoji.
One dearly hopes that Musk responds to Inman Grant in a similar manner.
It is probably too much to say that we hate Inman Grant, but we’re certainly disgusted by her, and we loathe what she is doing. This sanctimonious goon is way, way more dangerous than a few Twitter trolls and neo-nazi clowns.
More dangerous than online trolls and clowns, sure. But not more dangerous than communities that actively hate others and act on that hatred.
I think you’re right to whack this one, but I feel uneasy about what I’ve seen hateful groups get up to. Not so much in open communication and the lightning rods with a million+ followers. But the organisational support and facilitation of formation that comes from hate groups flocking around social media.
I’m not suggesting anything specific. All I’m saying is that this is tricky territory. Good post.
The violently hateful groups in Australia are small. The forces of Censorship and Right Thinking are large. The latter is a much greater threat.