
‘Weaponised' AI an existential threat to truth, human rights

"A human head outlined by geometric patterns with the letters "AI" in the middle

This opinion piece by Human Rights Commissioner Lorraine Finlay appeared in The Australian on Monday 15 May 2023.

In George Orwell's 1984, the Ministry of Truth exercises absolute control over information in line with the Party slogan: “Who controls the past controls the future: who controls the present controls the past”.

If the Ministry of Truth existed today, a more accurate slogan would be “Who controls the AI controls the past, the present and the future”.

Generative AI products such as ChatGPT and Bard, which have risen to prominence with remarkable speed, would be alluring tools for the fictitious Ministry of Truth. These products can alter our perception of reality - presenting fiction as fact, and potentially giving biased answers and misinformation a veneer of objective truth. When introducing Bard in February, Sundar Pichai (chief executive of Google and Alphabet) described AI as “the most profound technology we are working on today”. He is undoubtedly correct.

AI is a transformative technology that will change our world. It has the potential to help solve complex problems, boost productivity and efficiency, reduce human error and democratise information. The uses canvassed in areas such as healthcare and education highlight its capacity to significantly enhance human rights.

But it is not all upside. The risks and potential harms of generative AI products are immediate. The “Godfather of AI”, Geoffrey Hinton, shocked the world last week when he quit Google so he could “freely speak out about the risks of AI”. He has since described AI systems as posing an existential threat to humanity, warning that it is “hard to see how you can prevent the bad actors from using it for bad things”. More than 30,000 people - including giants of the tech industry such as Steve Wozniak and Elon Musk - recently signed an open letter calling for a pause on the training of advanced AI systems for at least six months due to the “profound risks to society and humanity”.

We know we can continue to develop AI systems that are more complex, smarter and faster. The more important question is whether we should. Cautionary tales are now emerging with disturbing regularity. AI chatbots are hallucinating and spreading misinformation, producing biased content and engaging in hate speech. The Bard web page itself acknowledges that Bard is experimental and “may display inaccurate information or offensive answers”.

To the delight of “wannabe” Ministries of Truth, generative AI tools provide new opportunities to control information and rewrite the past, present and future. For those with Orwellian tendencies, generative AI is a game-changer. It will now be easier than ever to use generative AI cheaply and efficiently to run disinformation campaigns both domestically and abroad. Recent examples highlight the growing threat posed by deepfakes and disinformation created and spread using generative AI tools: AI-generated deepfake newscasters on the fictional Wolf News outlet spreading pro-Chinese propaganda on social media, and a deepfake video of Ukrainian President Volodymyr Zelensky calling on Ukrainian citizens to surrender to Russia.

Concerningly, both ChatGPT and Bard have been shown to write convincingly in favour of known conspiracy theories.

The co-chief executive of NewsGuard described AI as “the most powerful tool for spreading misinformation that has ever been on the internet”, and researchers from the Center on Terrorism, Extremism, and Counterterrorism in Monterey assessed the risk of GPT-3 being weaponised by extremists as both significant and likely in the absence of safeguards.

Distinguishing between fact and fiction will become increasingly difficult as AI becomes commonplace in our daily lives. Even knowing whether we are interacting with a human or a machine may become challenging.

This can have real consequences for fundamental human rights. Most immediately, it threatens our freedoms of expression and thought. With many proponents touting generative AI as the next generation of search engine, there are real concerns that responses may be politically biased, peddle false information or have censorship and disinformation built into them.

The recent release of the draft Administrative Measures for Generative Artificial Intelligence Services by the Cyberspace Administration of China brings these concerns into sharp focus. These draft rules would regulate generative AI services provided to the public in mainland China. They include provisions requiring that all content produced using generative AI reflect “core socialist values”, and that all new generative AI products developed in China undergo a security assessment by national internet regulatory departments before being released.

The central question is: How do we harness the benefits of generative AI without causing harm and undermining human rights? The answer is to insist that humanity is placed at the very heart of our engagement with AI. We need to develop, deploy and use generative AI technology in responsible and ethical ways. Fundamental rights and freedoms must be protected at all stages of a product's lifespan, from concept and design through to sale and use. Some technology companies are already doing this (to varying degrees). Some governments are also engaging proactively with these questions. However, far too many governments and companies are not, instead placing these problems in the “too-hard basket”.

Australia needs to be a world leader in responsible and ethical AI. The recent launch of the Responsible AI Network, which aims to uplift the practice of responsible AI across the Australian commercial sector, is one example of the type of proactive leadership needed. The Human Technology Institute's work on AI in corporate governance is another.

There are actors in the technology space advocating for a better approach to generative AI in Australia. But unless government and business are prepared to step up and show leadership, we are likely to see the risks to human rights increase exponentially.

Unless we place humanity at the heart of AI, we will see the spectre of Orwell's Ministry of Truth manifest itself across the globe - with the real risk that those who control the AI technology will end up controlling our past, our present and our future.

Lorraine Finlay is Australia's Human Rights Commissioner.