Artificial Intelligence: governance and leadership whitepaper (2019)

Foreword
The Australian Human Rights Commission and the World Economic Forum are working together to explore models of governance and leadership in artificial intelligence (AI) in Australia. This White Paper has been produced to support a consultation process on this issue.
AI can enable prediction and problem-solving approaches that save the lives of seriously ill hospital patients.[1] Yet AI can also be used to threaten human rights. For example, we have seen allegations of AI entrenching bias and discrimination in the United States (US) criminal justice system,[2] as well as in policing in Australia.[3]
Scandals and controversies connected to new technologies have heightened public concern about AI-driven decision-making, data privacy, cyber security, political influence and labour market shifts.
Powerful new technological capabilities are emerging rapidly and changing our world. AI, biotechnologies, neurotechnologies, new materials and distributed ledgers are all under active development. As the costs of data storage, processing and communication fall, innovative private sector players are particularly active in making these technologies more widespread.
AI and Machine Learning (ML)—thanks to vastly expanded data sets, purpose-designed chipsets and low-cost cloud computing—are enabling breakthroughs, including:
- new forms of communication
- medical diagnosis and treatment
- industrial and consumer robotics
- business analytics
- facial recognition
- natural language processing.
Our challenge as a nation is to ensure these technologies deliver what Australians need and want, rather than what they fear. There is added urgency because other countries are investing heavily in these areas.
Globally, we are witnessing a fundamental shift. Leaders in the technology industry are increasingly abandoning a long-standing hostility to government intervention. Many are starting to call for new forms of governance and regulation.
At the World Economic Forum’s Annual Meeting in January 2018,[4] Uber Chief Executive Officer (CEO) Dara Khosrowshahi said: ‘My ask of regulators would be to be harder in their ask of accountability.’
Similarly, Salesforce’s Marc Benioff said that ‘the point of regulators and government [is] to come in and point true north’.[5]
In April 2018, Facebook CEO Mark Zuckerberg said:
[T]he real question, as the Internet becomes more important in people’s lives, is what is the right regulation, not whether or not there should be regulation.[6]
In November 2018, Apple CEO Tim Cook stated:
We have to admit the free market is not working ... and it hasn’t worked here. I think it’s inevitable that there will be some level of regulation.[7]
But what are we looking to regulate or govern better? The new, post-digital age—the so-called ‘Fourth Industrial Revolution’—encompasses a variety of digital, biological and physical technologies. Many of those new technologies are powered, at least in part, by a variety of algorithmic techniques, described collectively as AI and ML.
To date, community concern has focused on the right to privacy: for example, who owns, controls and exploits the personal data of individuals using AI-powered social media.
The potential impact of AI, including on other human rights, goes beyond privacy. For example, AI and related technologies could:
- bring radical changes in how we work, with predicted large-scale job creation and destruction, and new ways of working
- transform decision-making that affects citizens’ basic rights and interests
- increase our environmental impact
- become so central to how we live that access to the technology itself becomes an even more pressing human rights issue
- have a profound impact on our democratic institutions and processes.
Adopting the right governance framework is difficult, because AI technologies are complex, are applied across all sectors of the Australian community, and have enormous capacity for social good and social harm, often both simultaneously. However, Australian stakeholders need to consider and experiment with innovative models for ensuring that the economic gains, social influence and security impact of AI are positive for all.
Former US Secretary of State Madeleine K Albright pointed out that ‘citizens are speaking to their governments using 21st century technologies, governments are listening on 20th century technology and providing 19th century solutions’.[8] If our governance solutions are so far out of step with the powerful technologies we are collectively deploying, unanticipated risks are inevitable.
Protecting Australians, while powering our future economy, requires innovation that reinforces Australia’s liberal democratic values, especially human rights, fairness and inclusion. Making this vision real is a complex task. It will involve, for example, carefully crafted laws supported by an effective regulatory framework, strong incentives that apply to the public and private sectors, and policies that enable Australians to navigate an emerging AI-powered world.
The Australian Human Rights Commissioner is currently leading a project focusing on human rights and technology. In July 2018, the Commission published an Issues Paper addressing many of these issues. One question on which the Issues Paper sought feedback is whether Australia needs a better system of governance to harness the benefits of innovation using AI and other new technologies, while effectively addressing the threats to our human rights.
The Commission and the World Economic Forum have produced this White Paper to expand on that question posed in the Commission’s Issues Paper. Based on early analysis of data received by the Commission, this White Paper starts with the hypothesis that Australia needs to match the rising levels of innovation in AI technologies with innovation in AI governance, and focuses on the practical challenge of exploring what that might look like. The White Paper, therefore, focuses on one key question: whether Australia needs an organisation to take a central role in promoting responsible innovation in AI and related technology.
We invite you to share your perspectives.
Nicholas Davis
Head of Society and Innovation
Member of the Executive Committee
World Economic Forum
Edward Santow
Human Rights Commissioner
Australian Human Rights Commission
[1] For example: Robert Pearl, ‘Artificial Intelligence in Healthcare: Separating Reality from Hype,’ Forbes (online), 13 March 2018 <www.forbes.com/sites/robertpearl/2018/03/13/artificial-intelligence-in-healthcare/#c2d77d21d750>.
[2] Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, 'Machine Bias: There’s Software Used Across The Country to Predict Future Criminals. And It’s Biased against Blacks,' ProPublica (online) 23 May 2016 <www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>; Osonde Osoba and William Wesler IV, An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence (RAND Corporation, 2017).
[3] Law and Safety Committee, Parliament of New South Wales, The Adequacy of Youth Diversionary Programs in New South Wales (Report 2/56, September 2018), 2.108-2.110; Vicki Sentas and Camilla Pandolfini, Policing Young People in NSW: A Study of the Suspect Targeting Management Plan (Youth Justice Coalition NSW, 2017).
[4] Stuart Lauchlan, ‘World Economic Forum 2018 – Why Trust Has to Be Valued Higher than Growth in the 4IR,’ on Diginomica Government (23 January 2018) <www.government.diginomica.com/2018/01/23/world-economic-forum-2018-trust-valued-higher-growth-4ir/>.
[5] Ibid.
[6] Arjun Kharpal, ‘Mark Zuckerberg’s Testimony: Here Are the Key Points You Need to Know’, CNBC (online), 11 April 2018 <www.cnbc.com/2018/04/11/facebook-ceo-mark-zuckerberg-testimony-key-points.html>.
[7] Brian Fung, ‘"We Have to Admit When the Free Market Is Not Working": Apple Chief’s Privacy Call’, Sydney Morning Herald (online), 20 November 2018 <www.smh.com.au/business/companies/we-have-to-admit-when-the-free-market-is-not-working-apple-chief-s-privacy-call-20181120-p50h2n.html>.
[8] Digital Forensic Research Lab, ‘We Need 21st Century Responses: Secretary Albright Speaks at #DisinfoWeek’ on Medium (30 June 2017) <www.medium.com/dfrlab/we-need-21st-century-responses-6b7eed6750a4>.