The Queensland Civil and Administrative Tribunal (QCAT) in Australia made a landmark ruling this week that X can be held liable for hate speech published on its platform.
By Lexy Hamilton-Smith and Angie Lavoipierre
The decision is a win for the Australian Muslim Advocacy Network (AMAN), which lodged a complaint in July 2022 accusing X of being responsible for publishing denigrating and hateful comments from a far-right conspiracy group describing Muslims as "an existential threat" to the world.
Social media companies such as X have often relied on the legal argument that they’re not responsible for what happens on foreign soil because they don’t do business there.
But that principle has now been challenged.
According to AMAN's legal advisor Rita Jabri Markwell, when she was first preparing the case there was no precedent to draw on, making the complaint the first of its kind.
“The significance of this decision is that we now know that local hate speech laws do apply to social media companies. Usually people will bring vilification complaints against other individuals. But now they can take direct action against the companies that are profiting from that hate,” says Markwell.
Despite requests from AMAN, X refused to remove or geoblock posts that allegedly vilified Australian Muslims under Queensland's Anti-Discrimination Act.
According to X CEO Elon Musk, the company should be exempt.
But this week’s ruling rejects that.
"They were claiming that it would be preposterous for a Queensland tribunal to exert some sort of power over a global social media company. We contested that on many levels, and at the end of the day, it was enough to show that they are profiting from local markets and communities here, through collecting data and advertising," says Markwell.

"This could become a precedent that will carry weight in other jurisdictions, whether it's at the federal level, or whether it's under other vilification laws. Previously it's been very uncertain whether those laws apply to social media companies. It's been assumed by many that they do, but the tribunal's initial finding is significant because it pierces a favourite legal shield of social media giants.

"Now, we are on much firmer terrain, because we have a very detailed set of reasons using a range of very established authorities to show that vilification laws do apply to social media companies."
Australia was home to the Christchurch attacker, who was born in Grafton, NSW, and Markwell says there has been a lot of research showing how dangerous this kind of dehumanising disinformation is, in that it has driven people to acts of terrorism.
The posts in question were published in the aftermath of the 2019 Christchurch mosque attacks, in which 51 people were killed.
“Muslims here are very sensitive to the fact that we need to protect our communities from that kind of hate mongering online,” says Markwell.
The QCAT decision comes on the heels of a conflict between Australia’s online safety regulator and X over its refusal to take down video footage of the stabbing of bishop Mar Mari Emmanuel in April, which was live streamed.
Tension rose after X hid 65 tweets containing video of the attack rather than agreeing to the takedown order. The eSafety Commissioner, Julie Inman Grant, won an interim order in the Federal Court but failed to make the ban permanent.

According to University of Queensland free speech expert Kath Gelber, this ruling is an important win.
"Every time that there's a public decision like this, one that draws the line in the sand and says that you can't vilify an entire group of people based on their identity, then that is a win, and I'm pleased to see the result of this case," says Gelber.

"It is an important precedent that anti-vilification law applies to material that can be viewed and seen in the jurisdiction in which that law operates.

"It is interesting and important because recently we've seen pushback by some global internet companies, trying to say that because the home base of their jurisdiction is outside of Australia, and because they have global remit, Australian law shouldn't necessarily be applicable to them.

"So it's a win for the target communities, because they can continue to say, 'Look, the law is designed to educate people about how to exercise their free speech rights responsibly, in ways that don't harm others.'"
However, Professor Gelber says the ruling would not allow an immediate order forcing X to take down the "hate speech" comments, as it would first require mediation between the two parties.
"I think it would be a pretty good bet to suggest that that mediation wouldn't work. After that process has at least been attempted, it is within the rights of any complainant to refer the matter either to a tribunal or a court, depending on the jurisdiction. In that event, there could be an order to take down material at the end of the day," says Gelber.
The company can appeal the tribunal’s decision via the Queensland Supreme Court.
The Muslim Advocacy Network is still awaiting a separate decision as to whether X breached the law by failing to remove or hide the alleged hate speech.
AMAN also has a legal complaint against Meta and Facebook Australia, currently before the Human Rights Commission.
Meta, which owns Facebook, has also been fighting a case against the national privacy commissioner over the Cambridge Analytica scandal on jurisdictional grounds, arguing that it’s not subject to Australian law.
Note from the Editor: This story has been edited from its original publication here.