
Cyber-racism Symposium Report

This report summarises
the key issues discussed by panellists and observers at the Cyber-racism
Symposium. The opinions expressed are those of the participants and do not
necessarily represent the position of the Human Rights and Equal Opportunity
Commission.

The Symposium participants
considered the effectiveness of existing regulation of racial vilification
and proposed various suggestions for improvement. The participants also
discussed the non-regulatory options available to address cyber-racism. [1]

Contents

1) The impact of cyber-racism on the victims

2) Regulatory problems in dealing with cyber-racism and suggestions for change
   a) Jurisdictional issues
      i. Overseas authors and/or Internet Service Providers
      ii. Internet content scheme
      iii. Criminal law
   b) Administrative issues
      i. Time and resources required by victims
      ii. Identifying the proper respondent

3) Industry capacities to respond to cyber-racism

4) The diversity of Internet activity
   i. SPAM
   ii. Chat rooms

5) Non-regulatory responses to cyber-racism

1) The impact of cyber-racism on the victims

"Electronic
hate mail is emerging as a major concern for the Arab community in Australia.
There are numerous anti-Muslim/Arab sites on the Internet and they are
increasing on a daily basis."

This problem occurs
amid a marked increase in attacks against people of Arabic-speaking background
in Australia since 11 September 2001:

  • Women have been
    targeted and many are frightened to leave the security of their homes;
  • School children
    have been the subject of taunts and threats;
  • It is mistakenly
    thought by many that all Arabs are Muslims and all Muslims are Arabs.

Fear and concern
can cause people to act on negative feelings, and many feel frustrated.
The Arabic community feels there is little that can be done.

"The Jewish
community has also seen an increase in threatening and abusive email…
The Jewish community is inundated with complaints and there are sites
that seek out Jews for vilification. The perpetrators are looking for
new audiences, such as through Bulletin Boards, and some organizations
appear to allow this sort of message on their Boards."

Research by the Human
Rights and Equal Opportunity Commission demonstrates that racist material
can be found in websites, computer games, emails, chat-rooms, discussion
groups and music. Background information prepared
for the Symposium provides examples of racist Internet material, including
material that has been created by people within Australia.

"If freedom
of expression is to be absolute, certain problems arise as we see with
the First Amendment in the United States. Freedom of expression is a
human right, but it is not an absolute right. There must be limitations
when there are legitimate grounds. Interference with rights must be
regulated by law. In Europe it is very clear. Freedom of expression
is not an absolute over society."

2) Regulatory problems in dealing with cyber-racism and
suggestions for change

"Any attempt
at improving the regulatory system needs to begin with a rationale:
is it to censor, punish, bring about attitudinal change, be symbolic,
deter?"

a) Jurisdictional
issues

i) Overseas
authors and/or Internet Service Providers (ISPs)

The Human Rights
and Equal Opportunity Commission administers the Racial
Discrimination Act 1975
which makes racial hatred unlawful. A
person from the group against which the offensive racist act or material
is directed can make a complaint to the Commission. But it is not possible
to apply the legislation to ISPs or individuals located in other
countries.

"The legislation
works well when the authors of race hate material are in Australia and
can be identified and their material is hosted by an Australian ISP.
These were the circumstances in the Toben case. But these circumstances are not common: most racist Internet authors
are overseas and/or their material is hosted by overseas ISPs."

International cooperation should also be considered.

Many European countries,
individually and/or through the Council of Europe, have made incitement
to racial hatred and dissemination of racist materials criminal offences,
including when they occur on the Internet.

Australia can rely
on these standards in dealing with sites that are created or hosted in
some European countries. It may be effective to notify the authorities
of the country where racist material originates or is hosted. That country
may be able to prosecute the case, possibly in cooperation with other
member states of the Council of Europe who have criminalised racial hatred.

There also needs
to be more interagency cooperation within Australia. The Australian Broadcasting
Authority has international networks that could assist in dealing with
cyber-racism if the material originates in some overseas countries. The
police also have international networks, though the material would have
to be of a very serious nature, perhaps relating to security issues.

Questions
and issues:

  • What international
    networks are available to Australian agencies to report racist
    material that originates in Europe?
  • European
    laws may make it easier to deal with cyber-racism in some European
    countries. What about websites that are created or hosted in the
    United States where there are not strong anti-vilification laws?

ii) Internet
content scheme

The Internet content
scheme grew out of the classification regime for film and video and has
gradually been extended to apply to the Internet. The current classification
code deals with sex, violence and instructions to commit crime; it does
not deal with racism.

The Australian Broadcasting
Authority (ABA) cannot investigate complaints about racist Internet content
even though the ABA is the key Internet content regulator in Australia.

"The ABA
scheme is underpinned by guidelines that apply to films and videotapes.
It would be fair to say that they are primarily, if not essentially,
concerned with violent and sexually explicit material. The threshold
is high. If it is sexually explicit, violent and racist it would be
investigated. If it is racist but is not also sexually explicit or violent
it is difficult…"

The ABA can refer
Internet material to the Classification Board of the Office of Film and
Literature Classification (OFLC). The guidelines used by the OFLC to classify
Internet material are the same as those used by the ABA. The guidelines
were originally designed to regulate 'entertainment'.

"The members
[of the Classification Board within the OFLC] are not experts in racism
and work within the basic principles of the Broadcasting Services Act:
the standard of morality, decency, propriety that is expected by reasonable
adults. The system wasn't designed to remove hate or racism from any
delivery platform, including the Internet."

The Internet contains
more than 'entertainment' and the OFLC Board seeks to reflect community
standards in its classification of content. Anti-vilification laws are
a community standard. It seems desirable to have consistency in regulatory
standards so that the ABA and OFLC can assess and deal with racist content
in a way that is consistent with Australian law.

The Internet content
regulatory scheme gives the Australian Broadcasting Authority the power
to issue 'take down' notices to ISPs. The ABA also has links with international
voluntary hotlines and other networks. HREOC does not have these powers
or networks to deal with racist content. Could the classification guidelines
be changed so that the OFLC and the ABA can deal with racist content within
the existing Internet content framework?

"In Europe
it is easier to start a procedure and other persons can intervene and
commence a procedure. A watchdog can step in. As an organization they
may ask for an injunction."

Should there be a
pool of skilled people to identify and evaluate racist content? There
needs to be a body that advises the Australian Broadcasting Authority
on whether material is contrary to anti-vilification legislation; an assessing
body that isn't there to jail the perpetrator, but to make decisions on
content. The OFLC plays this role in dealing with sex and violence, so
could another body have these sorts of powers for racism? Could HREOC's
powers be changed so it could play this 'assessing' role for Internet
content?

The Australian Broadcasting
Authority would need to rely on a specialist tribunal. ISPs would also
want some confidence that there was a regulatory body that could provide
advice and judgement.

Questions
and issues:

  • If the Internet
    content scheme is intended to reflect community standards and
    values, why isn't race hate included?
  • Is it appropriate
    to regulate the Internet according to the standards applied to
    'entertainment' if the Internet is more than just an entertainment
    medium?
  • Should there
    be a specialist body that could advise the Australian Broadcasting
    Authority and ISPs if particular Internet content offends anti-vilification
    law?
  • Could HREOC's
    powers be changed so that it could make formal assessments of
    Internet content?

iii) Criminal
law

There is currently no criminal offence of racial vilification at the federal
level in Australia. There are federal criminal sanctions that can be used
in cases of violence, threats, harassment and so on. Serious racial vilification
involving a threat of violence is a crime in some States. But there are
no federal criminal laws dealing specifically with racial vilification.

Dissemination of
racist materials on the Internet has been criminalised by the Council
of Europe under the First Additional Protocol to the Cybercrime Convention.
This approach makes available criminal enforcement mechanisms, including
international co-operation on the basis of uniform criminal standards
across various countries.

It would be much
easier for Australia to use the international enforcement framework in
Europe if Australian standards on racial vilification were consistent
with those in Europe. This would also send a more uniform message about
the unacceptability of racist content.

"Criminal
law can only be used for serious issues and conduct that is harmful.
Other issues should be dealt with through civil law. Other measures
are also important in fighting racism such as education and economic
development. Self-regulation, such as informal ISP networks and law enforcement
agencies working together, helps to make the system effective. Criminal law
has a role to play, but it does not stand alone."

State criminal law
in Australia that prohibits serious racial vilification does not seem
to be effective as there have been no prosecutions under the legislation
to date. Some legislation has only recently been enacted such as the Racial
and Religious Tolerance Act
in Victoria which specifically covers
electronic communications.

"If racial
vilification is going to be a criminal offence, it has to be undertaken
by the Commonwealth. It is too difficult to undertake this at a State
level."

Advantages of introducing
federal criminal sanctions against racial vilification in Australia:

  • Cyber-racism
    is often an activity of organised race hate groups. Individual victims
    may feel too intimidated by such groups to undertake conciliation or
    civil court action against them.
  • It could provide
    consistency with European practice and therefore international enforcement
    mechanisms could be more easily used by Australian regulators.
  • Criminal law acts
    as a final sanction when all else fails.
  • Criminal law would
    impose stronger obligations on ISPs.

    "The
    existing system of legislation fails to protect the individuals. The
    Commonwealth's starting point has been about free speech and a non-criminal
    view. There has been no attempt to deal with this particular problem
    of cyber-racism... Matters are essentially handled as civil disputes,
    modelled on the basis that vilification is a 'breakdown of communication'.
    This of course is not necessarily the case."

Disadvantages of
introducing federal criminal sanctions against racial vilification:

  • It would be difficult
    to prove allegations of racial vilification to a criminal standard ('beyond
    reasonable doubt').
  • State criminal
    laws against racial vilification do not seem to have been effective
    and some of these have cumbersome procedures such as requiring special
    consent from the State Attorney-General.
  • Criminal justice
    may not be appropriate to solve some social problems.

    "The
    ordinary individual may feel censored and this could be far more intrusive
    to individuals. Short of incitement to crime, they should be able
    to say what they want."

Alternatively, the
civil regime could include stronger penalties. There are regimes in which
bureaucracies, such as the Office of the Employment Advocate, have been
established to 'police' compliance using civil penalties.

The problem of prosecuting
people outside Australia, and particularly in the United States, would
also remain.

"In reality,
people will not be prosecuted if they are outside Australian jurisdiction.
This would involve extradition with enormous resources required and
there are limitations, restrictions and difficulties with this. The
Australian Federal Police are currently focusing on terrorism. It is
a political question of determining priorities."

Questions
and issues:

  • Would stronger
    civil penalties improve the effectiveness of the Racial Discrimination
    Act?
  • Would it
    be possible to have a 'two tiered' system of racial vilification
    laws at the federal level:
    1) the Racial Discrimination Act to deal with behaviour
    that is offensive; and
    2) a federal criminal Act to define and punish behaviour that
    threatens or incites violence on the basis of race?
  • Does the
    system for referring violent material from the ABA to the police
    deal adequately with violent racist content?
  • Vilification
    is a criminal offence in many States. What mechanisms exist to
    refer vilificatory Internet material for investigation in these
    jurisdictions?

b) Administrative
issues

i) Time and
resources required by victims

The Racial Discrimination
Act
places the onus on the victims of racism to combat the problem.
A person lodging a complaint must be from the targeted group. Other Australians
who may find the material offensive, but who are not from the racial group
that is vilified, cannot act.

The reality is that
most victims of racism do not have the resources to pursue cases through
HREOC, and then through the courts, as happened in the Toben case. That case was possible because the complainant was supported by
the community he represented, and had the unpaid assistance of a solicitor
and barrister. Most victims of vilification suffer disadvantage and would
not be able to find the resources to do this.

ii) Identifying
the proper respondent

In legal prosecutions
it can be difficult to trace the originator of material (including emails)
or the owner of a site, even when they are located within Australia. How
is action to be taken against anonymous sites or emails? How and by whom
is the proper respondent to be located?

"Racism
on the Internet is a new problem for human rights institutions. It is
a challenging problem and it requires the development of new IT competencies
so that anti-discrimination agencies can investigate complaints effectively."

3) Industry
capacities to respond to cyber-racism

"No ISP
would want to associate themselves with racism on a web site. If there
was a formal complaint, it would be hard to imagine any ISP wanting
to keep that information. If they wanted to defend their customers it
may require seeking additional advice, but generally the organisation's
reputation is important."

The Internet Industry
Codes of Practice provide some scope to deal with racist content on the
Internet as Australian ISPs can respond to the directions of a 'relevant
authority' to remove Internet content. HREOC cannot make an assessment
of the content of a site in the way that the Australian Broadcasting Authority
can. HREOC can only investigate and attempt to conciliate complaints but
has no enforcement powers. HREOC could not order a site to be taken down.
The courts are a 'relevant authority' and could make an order for an ISP
to remove offensive content.

"The Industry
would prefer a system where material can be judged by an authorized
agency and then the ISP can be ordered to remove the material. Industry
providers don't want to be in the position of classifying content."

ISPs may be considered
a 'publisher' of the material and therefore liable for it. ISPs have a
responsibility to make sure racist content is dealt with and to send a
clear message to their customers that it is unacceptable. This is the
expectation in Europe.

Industry can assist
with investigations. There is a difficulty with pre-paid Internet accounts
as there is no physical address. However, customer and caller details
can be provided to law enforcement agencies to assist in identifying people
involved in criminal activity on the Internet. There are initiatives towards
caller-line identification (CLI) to assist police and other investigative
bodies. This may permit better identification of the authors of vilificatory
material.

ISPs are required
by the Codes of Practice to provide customers with information about adhering
to Australian law. Providers also have obligations to advise customers
on how to limit access to content that they may find unsuitable. There
are online safety tools such as filters that can block racially offensive
material, and ISPs have to provide advice and at-cost filtering products.

Questions
and issues:

  • What are
    the liabilities of Australian ISPs which host racist material?
    Are they effectively 'publishers' of the material and therefore
    have some legal responsibility for it?
  • The courts
    do qualify as a 'relevant authority' and could make an order that
    an ISP remove a site.
  • Is the information
    provided by ISPs in investigations available only to the police
    or to other investigatory bodies as well? Will this information
    be available to anti-discrimination agencies investigating complaints?
  • Is the advice
    to ISP customers about adhering to Australian law currently effective?

"Internet
industry groups are trying to do the right thing. Industry needs to
be seen to be pro-active. But monitoring everything is impossible, and
there are legal risks involved. It's a balancing act: to identify offenders,
but also to protect privacy."

4) The diversity
of Internet activity

i) SPAM

Email is currently
not regulated by uniform legislation.

Racially vilificatory
material can be distributed by unsolicited bulk email, or 'Spam'. Spam
accounts for about one quarter of all emails sent globally.

"Of the
content of Spam, about one third is advertising pornography and one
third is get-rich-quick schemes. There is concern about the content
including material of a racially vilificatory nature. The use of threatening
emails is probably an offence under Australian criminal law."

ii) Chat rooms

Internet chat rooms
may contain racist content and this medium is very difficult to monitor.
Would the Racial Discrimination Act apply to chat rooms or are
they 'private' communications if they are password protected? The level
of password protection is often very shallow and in such cases the Act
should apply.

Internet service
providers can do some monitoring, for example, by scanning room names.
It would be very resource intensive and probably not possible to routinely
identify racist words inside chat rooms or bulletin boards.
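
As a rough illustration only, and not a description of any ISP's actual
systems, the Python sketch below checks a list of room names against a small
watch list; the room names and terms are invented placeholders. Checking
names in this way is a small, bounded job, whereas applying the same check
to every message posted in every room is what makes routine monitoring so
resource intensive.

    # Sketch: scanning chat-room names against a small watch list.
    # The terms and room names are placeholders for illustration.
    WATCH_LIST = {"exampleslur", "hate"}

    def flagged_rooms(room_names):
        """Return the room names that contain a watch-list term."""
        hits = []
        for name in room_names:
            lowered = name.lower()
            if any(term in lowered for term in WATCH_LIST):
                hits.append(name)
        return hits

    rooms = ["general-chat", "Example Hate Forum", "sports talk"]
    print(flagged_rooms(rooms))  # prints ['Example Hate Forum']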

Monitoring can also
be done by the public by bringing racist content to the attention of the
ISP.

"If there
is offensive material on a chat room site, the providers need people
to contact them for action. For providers to investigate, the chat room
has to be online, and contain the offending material at the time. Monitoring
all information would make the website far too slow, if it was possible
at all."

5) Non-regulatory
responses to cyber-racism

"It is
unrealistic for legislation to deal with racism. It has to be a multi-faceted
approach, not legal sanctions alone. Racism is entrenched and education
is needed to address this issue."

The non-regulatory
responses to cyber-racism would seem to fall into a number of categories,
including technical responses, end user education, increased agency cooperation
and community action.

As with the improvement
of regulatory systems, the aim of non-regulatory approaches needs to be
determined: is it to protect individuals or families, protect society,
stop sites, stop racism?

Can cyber-racism
be eliminated by filters? Under the Broadcasting Services Act, ISPs must
provide a filter free of charge or at cost, as part of a family-friendly
policy. Consumers need to be aware of what filters provide and
make their own evaluation. Filters are not 100% effective, but may be about
70-80% effective.

There are problems
with filters as they can block out 'good' sites which promote anti-racism,
as well as blocking racist sites. There are also problems with broadband
in that text-based information can be hard to recognise, as people can,
for example, change letters. Smarter filtering could and should play a
bigger part.
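
As an illustration of the kind of 'smarter filtering' mentioned above, and
not a description of any existing filter product, the minimal Python sketch
below undoes common letter substitutions before matching text against a
watch list. The substitution map and the placeholder terms are invented for
the example.

    # Minimal sketch of keyword filtering that handles simple letter swaps.
    # The substitution map and watch list are illustrative placeholders.
    SUBSTITUTIONS = str.maketrans({
        "@": "a", "4": "a", "3": "e", "1": "i", "!": "i",
        "0": "o", "$": "s", "5": "s",
    })
    WATCH_LIST = {"exampleslur", "anotherslur"}

    def normalise(text):
        """Lower-case the text and undo common character substitutions."""
        return text.lower().translate(SUBSTITUTIONS)

    def flag_text(text):
        """Return True if any watch-list term appears in the normalised text."""
        cleaned = normalise(text)
        return any(term in cleaned for term in WATCH_LIST)

    print(flag_text("Ex@mpleslur forum"))   # True once letters are normalised
    print(flag_text("Research on racism"))  # False

A plain keyword match would miss spellings with swapped characters; even so,
a crude match of this kind can still block legitimate anti-racism material
that quotes or discusses the same terms, which is the problem of 'good'
sites being filtered noted above.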

Organisations or
individuals can give themselves a presence on the Internet as anti-racist
advocates and educators.

"One of
the reasons racist sites are so effective is because anti-racism sites
have not been taken up by people involved in these areas. Audiences
come to sites that deliver. The Internet is an emotional as well as
intellectual tool. Racist sites work on raising the temperature of their
audiences and anti-racism sites need to cool this ardour down."

There is a need for
education in critical thinking about all media, including the Internet,
mainly in years 11 and 12 of high school. 'NetAlert' is an independent
advisory body set up by the Commonwealth Government. It has run a nationwide
program through schools and community organisations and advertises on
television. It has a web site and provides information packages to
the public and to ISPs. HREOC and NetAlert could work more closely to
examine the opportunities for providing more anti-racism education.

The Australian Broadcasting
Authority is looking at models for training Internet users in critical
thinking. There are models in Europe, for example between the French and
Belgian education departments. This education addresses a series of issues
such as 'stranger danger' in chat rooms and how to assess the quality
of information on a web site. These sorts of models could be used to educate
about racism.

There could be a
content rating scheme. The Platform for Internet Content Selection (PICS)
is a mechanism that could be used to classify sites. Users can be alerted
or prevented from accessing sites which violate their preferences.

The system of community
'black lists' could help. Individuals might wish to nominate the content
that they consider racist and add sites to these lists. Individual computers
can also be configured to screen out content that the user does not wish
to see. It is a preference system, not a censorship system.
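
A minimal sketch of what such a preference system could look like on an
individual machine, assuming a user-nominated list of domains (the domains
below are placeholders): software under the user's control checks each
requested address against the personal list and skips matches, without
affecting what anyone else can see.

    # Sketch of a user-side block list: the list is chosen and maintained by
    # the user, so nothing is filtered for anyone else.
    from urllib.parse import urlparse

    # Domains the user has nominated; illustrative placeholders only.
    personal_blocklist = {"example-hate-site.org", "another-nominated-site.net"}

    def is_blocked(url, blocklist):
        """Return True if the URL's host is a nominated domain or one of its subdomains."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == dom or host.endswith("." + dom) for dom in blocklist)

    print(is_blocked("http://www.example-hate-site.org/page", personal_blocklist))  # True
    print(is_blocked("http://www.example.org/news", personal_blocklist))            # False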

"There
is a critical need for more interaction between agencies. There needs
to be closer cooperation between regulators, service providers, technical
experts, educators and community groups. People getting together in
smaller, more task focused groups and looking at a wide range of strategies
at different levels."


[1] Quotes have been reconstructed from notes taken at the Symposium
and are not attributed.