
Safeguarding the Right to Privacy

Technology and Human Rights

The Australian Human Rights Commission's (Commission) submission helped shape the Government's response to the review of the Privacy Act 1988 (Cth) (Privacy Act), with sixteen of the Commission's recommendations adopted either in full or in part.

Protection of personal information

The Privacy Act protects personal information. In an age where digital participation is essential, users are pressured to either agree to data collection or be excluded from digital spaces.

Digital participation should not come at the expense of privacy. The human right to privacy must be a core obligation of the Privacy Act.

Collection, use and disclosure of personal information 

Privacy policies and collection notices are often complex and legalistic. This undermines an individual's understanding of how their data will be handled.

Where individuals cannot ascertain what data is being collected and for what purposes, organisations can evade accountability and use information that an individual would not otherwise have agreed to share.

Clear, up-to-date, and understandable collection notices are essential to protect against the misuse of personal information.

Consent  

Consent is fundamental to the concept of privacy.

To improve the quality of consent, any consent to the collection of personal information must be voluntary, informed, current, specific, and unambiguous.

Guidance from the Office of the Australian Information Commissioner (OAIC) will clarify how these requirements can be factored into online consent requests.
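
By way of illustration only, the sketch below shows one way an online service might record the elements of quality consent described above: a separate, explicit opt-in per purpose (voluntary, specific, unambiguous), the notice version shown at the time (informed), and an expiry check (current). All names, fields, and the expiry period are hypothetical and are not drawn from the Privacy Act or OAIC guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical record of a single consent event. Field names are
# illustrative only; they do not reflect any prescribed format.
@dataclass
class ConsentRecord:
    purpose: str                  # specific: one record per stated purpose
    granted: bool                 # unambiguous: an explicit opt-in, never a default
    notice_version: str           # informed: the collection notice shown at the time
    obtained_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_current(self, max_age_days: int = 365) -> bool:
        """Current: treat consent as stale after a set period (assumed here)."""
        age = datetime.now(timezone.utc) - self.obtained_at
        return self.granted and age <= timedelta(days=max_age_days)

# Usage: consent is sought separately for each purpose, so agreeing to
# one use of data never implies agreement to another (voluntary, specific).
marketing = ConsentRecord(
    purpose="email marketing",
    granted=True,
    notice_version="2024-01",
)
assert marketing.is_current()
```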

Facial recognition technology

The expanding use of facial recognition technology threatens considerable reductions in personal privacy.

The prospect of close monitoring by police and government agencies can deter participation in lawful democratic processes, putting other rights and freedoms at greater risk.

Automated decision-making

Automated decision-making (ADM) poses considerable risks to individuals, especially where the decisions will have a legal or similarly significant effect on an individual's rights.

Two critical risks raised by the development of ADM tools are automation bias and algorithmic bias.

Automation bias refers to the tendency of decision-makers to rely uncritically on the outputs of ADM tools rather than exercising genuine discretion.

Algorithmic bias refers to situations where an ADM tool produces outputs that systematically disadvantage particular people or groups, resulting in unfair outcomes.

These risks may be mitigated by adopting a human rights-centred approach to designing and deploying ADM tools.
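
One practical element of a human rights-centred approach is routinely testing an ADM tool's outputs for disparate impact between groups before and during deployment. The sketch below is a minimal, hypothetical illustration of such a check using a demographic parity gap; the decisions, group labels, and threshold are invented for the example and do not describe any real system or prescribed methodology.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Rate of favourable outcomes per group from (group, approved) pairs."""
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical ADM outputs: (group label, favourable decision?).
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rates(decisions)
# Demographic parity gap: a large gap between groups flags possible
# algorithmic bias and would prompt human review (0.2 is an assumed threshold).
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:
    print(f"Possible algorithmic bias: approval rates {rates}, gap {gap:.2f}")
```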