Why the Misinformation Bill Risks the Freedoms It Aims to Protect

This opinion piece by Human Rights Commissioner Lorraine Finlay appeared in The Australian on Thursday 24 August 2023.

Despite being labelled the “word of the decade” in 2021, fake news is not a modern phenomenon. Misinformation has been spread for political gain since Octavian used fake news to discredit Mark Antony in ancient Rome.

What is different today is the way modern technology makes it easier to spread fake news around the world but harder to distinguish fact from fiction. Misinformation and disinformation can have devastating effects on human rights, social cohesion and democratic processes.

Australia needs to address these risks. But doing so must be balanced against ensuring we don’t unduly restrict freedom of expression.

This is the key problem with the federal government’s proposed Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill. The draft bill aims to give the Australian Communications and Media Authority increased powers to combat online misinformation and disinformation, but in a way that fails to strike a balance between censoring objectively untrue content and protecting freedom of expression.

Concerns about whether the draft bill strikes the right balance have been expressed by a range of groups, including social media companies such as Meta, legal experts such as the Victorian Bar Council, and the Media, Entertainment and Arts Alliance (which represents more than 15,000 workers in media and cultural sectors). The full extent of feedback about the draft bill is not yet known, with public submissions to the government’s consultation process closing last week but publication of those submissions being delayed until next month.

The Australian Human Rights Commission’s submission, which has already been made public, highlights four key concerns about the draft bill.

The first issue is the overly broad and vague way key terms – such as misinformation, disinformation and harm – are defined. Laws targeting misinformation and disinformation require clear and precise definitions.

Drawing a clear line between truth and falsehood is not always simple, and there may be legitimate differences in opinion as to how content should be characterised. The broad definitions used here risk enabling unpopular or controversial opinions or beliefs to be subjectively labelled as misinformation or disinformation, and censored as a result.

The second key problem is the low harm threshold established by the proposed law. Content that is “reasonably likely to cause or contribute to serious harm” risks being labelled as misinformation or disinformation. The categories of harm are themselves extremely broad, including things like “harm to the health of Australians” and “harm to the Australian environment”. Reasonable people may have very different views about what constitutes harm under these categories. The definitions also provide no guidance about how harm is meant to be judged.

It is true that what is required under the bill is not just harm but serious harm. The effect of this, however, is uncertain given the proposed law does not go on to define serious harm. It further requires only that the content has to be “reasonably likely to cause or contribute to serious harm”. Content can be labelled as misinformation even if it does not actually cause harm – it only has to be “reasonably likely to do so”.

Further, the harm threshold is not limited to causation but requires only contribution, and no minimum level of contribution is stated. This leaves open the possibility that even a minor or tangential contribution will be sufficient. The harm threshold established under this draft bill is extremely low, which risks allowing an extremely broad range of content to be restricted.

The third concern highlighted by the commission is the way the proposed law defines excluded content, which is content that is protected from being labelled as misinformation or disinformation.

One key example here is that the draft bill defines any content that is authorised by the government as being excluded content. This means government information cannot, by definition, be misinformation or disinformation under the law.

This fails to acknowledge the reality that misinformation and disinformation can come from government. Indeed, government misinformation and disinformation raises particular concerns given the enhanced legitimacy and authority that many people attach to information received from official government sources.

This specific exclusion privileges government content but fails to accord the same status to content authorised by the opposition, minor parties or independents.

The result is that government content can never be misinformation but content critical of the government produced by political opponents might be. Any law censoring online information to counter misinformation and disinformation must be scrupulously impartial and apolitical.

The fourth concern relates to the powers to regulate digital content that the draft bill grants to digital platform providers and (indirectly) the ACMA.

There are inherent dangers in allowing any one body – whether it be a government department or social media platform – to determine what is and is not censored content. The risk here is that efforts to combat misinformation and disinformation could be used to legitimise attempts to restrict public debate and censor unpopular opinions.

Striking the right balance between combating misinformation or disinformation and protecting freedom of expression is a challenge with no easy answer.

While we need to respond to the risks posed by misinformation and disinformation (which will realistically involve some restriction on what kinds of content can appear online), this draft bill does not strike the right balance. However future efforts to combat misinformation and disinformation may better balance these competing interests, they will need strong transparency and scrutiny safeguards to protect freedom of expression. It is these mechanisms that are sorely missing from the draft bill in its current form.

If we fail to ensure robust safeguards for freedom of expression online, then the measures taken to combat misinformation and disinformation could themselves risk undermining Australia’s democracy and freedoms.

Lorraine Finlay