Chapter 4 - Challenges in the online referendum debate

Introduction

4.1 The preceding chapters have focused on the conduct of the referendum in terms of official information given to the public, and foreign actors who might seek to interfere in the referendum. This chapter considers challenges arising in the civil debate itself, and particularly the online referendum debate.

4.2 Critical to any successful electoral event and the proper conduct of a civil debate in a liberal democracy is the dissemination of factual, relevant, and reliable information to the voting public. As discussed in chapter two, the official ‘Yes’ and ‘No’ case pamphlet distributed to every Australian ahead of the referendum will be a vital piece of information for voters. However, this is just one source of information on which Australian voters will be able to base their decision.

4.3 The Australian public increasingly accesses news and other information via digital platforms.[1] Further, it appears that substantial numbers of people are concerned about the quality of the news and journalism they consume.[2]

4.4 While the use of propaganda and the spread of misinformation are not new features of electoral events, never before has a mechanism existed to spread misinformation so effectively. Social media in the 21st century has dramatically increased the risk of misinformation and disinformation proliferating in online debate.

4.5 This chapter discusses the challenges of administering the online referendum debate, and the harm that may arise from the proliferation of misinformation. It begins by summarising the current regulatory arrangements. Next, it outlines the key issues arising from misinformation, namely the manipulation of community discourse and understanding of the referendum question, and the harassment of Aboriginal and Torres Strait Islander people. The chapter finishes by outlining gaps in the regulatory framework, followed by some suggestions for improvement.

Current regulatory arrangements

4.6 Conduct on social media platforms is regulated by government agencies empowered by relevant legislation, as well as by co-regulatory and self-regulatory schemes. The following section outlines some of the measures currently in operation.

Australian Electoral Commission

4.7 During electoral periods, the Australian Electoral Commission (AEC) undertakes some actions to address online content, such as:

enforcement of the Commonwealth Electoral Act 1918 and the Referendum (Machinery Provisions) Act 1984, which require electoral and referendum communications to be authorised by the publisher;

administration of the ‘Stop and Consider’ campaign which encourages people to consider the source of electoral advertising and material; and

maintenance of a ‘Disinformation Register’, through which the AEC actively monitors and reports on factually incorrect electoral-related social media content.[3]

eSafety Commissioner

4.8 The eSafety Commissioner (eSafety) is Australia's independent regulator for online safety. eSafety's legislative functions, provided for by the Online Safety Act 2021, include:

coordinating online safety activities across the Australian Government;

supporting and conducting educational and community awareness programs;

administering regulatory complaints and investigation schemes covering cyberbullying of children, cyber abuse of adults, the non-consensual sharing of intimate images, and illegal or restricted online content; and

regulating social media platforms' broader systems and processes.[4]

4.9 Online material is considered cyber abuse by eSafety if it is:

…posted on a service such as a social media service, if it targets a particular Australian adult, if it is intended to cause serious harm, and if it is menacing, harassing or offensive in all the circumstances.[5]

Australian Communications and Media Authority

4.10 The Australian Communications and Media Authority (ACMA) is the independent statutory authority responsible for the regulation of broadcasting, radiocommunications and telecommunications in Australia. Many of ACMA’s functions and powers are conferred by the Broadcasting Services Act 1992 (BSA), which sets out the regulatory environment for the traditional television and radio broadcasting industry in Australia.

4.11 However, ACMA's remit also includes aspects of online content regulation. The ACMA ‘monitors digital platforms' activities under the voluntary Australian Code of Practice on Disinformation and Misinformation (the ACPDM), including in response to coordinated campaigns by foreign actors’.[6]

4.12 In January 2023, the Government announced that it would consult on new legislation to provide the ACMA with new regulatory powers to formally oversee the activities of digital platforms with respect to mis- and disinformation. Proposed new powers for the ACMA include:

formal information-gathering powers (including powers to make record-keeping rules) to oversee digital platforms and incentivise greater transparency, including the ability to request certain data on the effectiveness of measures to address disinformation and misinformation; and

powers to register an enforceable industry code and a standard (should industry self-regulation measures prove insufficient in addressing the threat posed by misinformation and disinformation). This graduated set of powers includes measures to protect Australians, such as stronger tools to empower users to identify and report relevant cases.[7]

Australian Code of Practice on Disinformation and Misinformation

4.13 Commencing in 2021, the Australian Code of Practice on Disinformation and Misinformation is a voluntary code of conduct designed to reduce the risk of online misinformation causing harm to Australians. The ACPDM contains a range of best practice commitments by signatory platforms aimed at protecting Australians participating in democratic policy-making processes from harmful misinformation and disinformation (such as voter fraud, voter interference, and voting misinformation), while also ensuring that signatories give due regard to human rights such as the need to protect citizens' freedom of speech. The code currently has eight signatories: Adobe, Apple, Google, Meta, Microsoft, Redbubble, TikTok and Twitter.[8]

ACMA review

4.14 In June 2021, ACMA published its report on the adequacy of digital platforms’ disinformation and news quality measures. It found that the scope of the ACPDM is limited by a threshold requiring harm to be both ‘serious’ and ‘imminent’ before action is required:

The effect of this is that signatories could comply with the code without having to take any action on the type of information which can, over time, contribute to a range of chronic harms, such as reductions in community cohesion and a lessening of trust in public institutions.[9]

4.15 ACMA also recommended that:

the ACPDM include private messaging services, due to increasing concern about the propagation of disinformation and misinformation through these services, particularly when used to broadcast to large groups;

a clear and transparent measurement framework be developed; and

ACMA undertake an additional review of the ACPDM by the end of the 2022–23 financial year, as the code administration framework was not yet complete.[10]

Platform policies

4.16 Each of the major social media platforms has internal policies or guidelines regarding what content is and is not accepted on its platform.

4.17 For example, Meta provided the committee with an outline of its policies on misinformation. These policies state that Meta will:

remove misinformation that could directly contribute to the risk of imminent physical harm, interference with the functioning of political processes, and certain highly deceptive manipulated media; and

reduce the spread of misinformation that is identified and verified as false by independent third-party fact-checkers.[11]

4.18 These policies are guided by Meta’s Community Standards, which outline what content is and is not allowed on Meta’s services.[12]

4.19 To increase transparency around social issue, electoral and political ads, Meta requires all advertisers of such ads in Australia to complete a number of steps to confirm their authenticity.[13] Advertisers must provide identification and be authorised by Meta prior to running the ad. They must also include a ‘paid for by’ disclaimer on their ad, and their ads are stored in Meta’s publicly available Ad Library for seven years, even if the page that posted them is no longer operational.[14]
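The Ad Library referred to above is publicly searchable, and Meta also exposes it programmatically through its Graph API. The following is a minimal sketch of how a researcher might retrieve Australian political and social issue ads mentioning the referendum; it assumes the documented ads_archive endpoint and its search_terms, ad_type and ad_reached_countries parameters, which should be verified against Meta's current developer documentation before use.

```python
# Minimal sketch: querying Meta's public Ad Library via the Graph API.
# Assumes the documented `ads_archive` endpoint and its `search_terms`,
# `ad_type` and `ad_reached_countries` parameters; verify parameter names
# and the current API version against Meta's developer documentation.
import requests

AD_LIBRARY_URL = "https://graph.facebook.com/v18.0/ads_archive"  # version is an assumption


def fetch_voice_ads(access_token: str, limit: int = 25) -> list[dict]:
    """Return basic metadata for Australian political/issue ads mentioning the Voice."""
    params = {
        "search_terms": "Voice referendum",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["AU"]',
        "fields": "page_name,ad_delivery_start_time,ad_snapshot_url",
        "limit": limit,
        "access_token": access_token,
    }
    response = requests.get(AD_LIBRARY_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("data", [])


if __name__ == "__main__":
    # Requires a valid Meta developer access token; placeholder shown only.
    for ad in fetch_voice_ads("YOUR_ACCESS_TOKEN"):
        print(ad.get("page_name"), ad.get("ad_snapshot_url"))
```

Running the query requires a Meta developer access token; the field names and API version shown are illustrative assumptions rather than a definitive specification.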

Independent third-party fact-checking

4.20 Some social media platforms, such as Meta, have commercial agreements with independent third-party fact-checking (3PFC) organisations. These organisations ‘identify, review and rate viral misinformation’.[15]

4.21 Meta submitted that it has agreements with ‘90 fact checking partners covering more than 60 languages’. It provided a description of how 3PFC functions:

An Australian user will see a warning label on content that has been fact-checked by an international factchecking partner. Content found to be false by our international fact-checking partners will be demoted in an Australian user’s Feed, meaning there is less chance of them seeing it.

Once a third-party fact-checking partner rates a post as ‘false’, we apply a warning label that indicates it is false and shows a debunking article from the fact-checker. It is not possible to see the content without clicking past the warning label.[16]
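Meta's submission describes the observable behaviour (a warning label plus demotion in the Feed) rather than the underlying implementation. The sketch below is a hypothetical illustration of that label-and-demote pattern only; the rating values, the demotion multipliers and the scoring model are assumptions and are not drawn from Meta's systems.

```python
# Hypothetical sketch of the label-and-demote pattern described above.
# Ratings, demotion factors and the scoring model are illustrative only;
# they are not Meta's actual values or implementation.
from dataclasses import dataclass
from enum import Enum


class FactCheckRating(Enum):
    UNRATED = "unrated"
    PARTLY_FALSE = "partly_false"
    FALSE = "false"


@dataclass
class Post:
    post_id: str
    base_score: float                      # relevance score from the ranking model
    rating: FactCheckRating = FactCheckRating.UNRATED
    debunk_url: str | None = None          # article supplied by the fact-checker


# Hypothetical demotion multipliers applied to the ranking score.
DEMOTION = {
    FactCheckRating.UNRATED: 1.0,
    FactCheckRating.PARTLY_FALSE: 0.5,
    FactCheckRating.FALSE: 0.2,
}


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by their demoted score, so rated-false content appears lower."""
    return sorted(posts, key=lambda p: p.base_score * DEMOTION[p.rating], reverse=True)


def render(post: Post) -> str:
    """Show a warning interstitial for rated content instead of the content itself."""
    if post.rating is not FactCheckRating.UNRATED:
        return (f"[Warning: rated {post.rating.value} by an independent fact-checker. "
                f"See {post.debunk_url}]")
    return f"[Post {post.post_id}]"
```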

Key issues arising from misinformation

4.22 Submitters and witnesses pointed to two issues that may be exacerbated by online misinformation:

manipulation of community discourse and understanding of the referendum question; and

the harassment of Aboriginal and Torres Strait Islander people.

4.23 The following section outlines these concerns.

Manipulation of the referendum question

4.24 The Australia Institute contended that the referendum topic was at real risk of becoming subject to misinformation:

Exaggerated claims have been a feature of many earlier referendums. Constitutional experts warn that misinformation could distort how people vote and factchecking has already found some claims made in relation to the Voice were incorrect or misleading. Vitriolic comments are already being anticipated, given the ugly and hyperbolic statements made by some opponents of the same-sex marriage plebiscite. The impact of misinformation will be amplified on social media.[17]

4.25 The RMIT Factlab stated it had already identified an increase in misinformation and disinformation relating to the referendum, spread through multiple forms of content. It provided a list of several ‘Voice-related debunks’ that it had discovered online thus far in the referendum campaign.[18]

4.26 Ms Alice Dawkins, Executive Director of Reset.Tech Australia, argued that the increased use of social media to access news has resulted in ‘intense audience fragmentation’, that is, the separation of audience groups from the mass audience through the specialised, personalised content served by social media applications. She warned that audience fragmentation amplified ‘extraneous or extreme views which distort from this essential question of 'yes' or 'no'’ and separated the public from a common understanding of the world.[19]

4.27 RMIT Factlab described this phenomenon as ‘information disorder’ and considered it inextricably linked to the ease with which ‘information can be uploaded, shared and reshared on social media’.[20]

4.28 Dr Shumi Akhtar, Associate Professor at the University of Sydney, submitted that in the context of the referendum ‘there is no limit to rumours and false and misleading information’.[21]

Online abuse of Aboriginal and Torres Strait Islander people

4.29 The potential for misinformation to create space online for racial abuse emerged as a key concern in evidence to the inquiry. Mr Toby Dagg, Acting Chief Operating Officer, eSafety Commissioner, told the committee that there had been an increase in racially abusive material online:

We have seen some uptick in complaints that centre on material posted relevant to the Voice that seeks to denigrate or insult or threaten or otherwise abuse those who identify as Aboriginal and Torres Strait Islander and some of those who are expressing support for either position, either yes or no.[22]

4.30 Mr Dagg noted that the abuse eSafety had observed was primarily directed at individuals in their own capacity, rather than at the position on the Voice the individual held:

Part of that, I think, is because some of the quality of debates online is coarsening. It seems to me, based on some of what I see during complaint review and some of what I notice myself on social media, that people tend to be a lot more ready to employ personal ad hominem attacks and threats in order to make a point online.[23]

4.31 A similar increase in online abuse toward Aboriginal and Torres Strait Islander people is observed ‘reliably’ each year during the AFL Indigenous Round. The eSafety Commissioner submitted data quantifying this increase:

An average of 5 per cent of all cyber abuse complaints to the eSafety Commissioner are made by Indigenous Australian adults.

During the 2022 AFL Indigenous Round (27 May to 3 June), 13.2 per cent of all adult cyber abuse complaints were made by Indigenous Australian adults.[24]

4.32 Similarly, the Australian Muslim Advocacy Network (AMAN) submitted that it had observed an increase in campaigns aimed at ‘dehumanising and demonising’ Indigenous people in the lead-up to the referendum.[25]

4.33 AMAN stressed the importance of such rhetoric being stamped out, stating:

When dehumanising discourse becomes more socially acceptable, discrimination, disrespect and violence toward Indigenous people also becomes more acceptable: ‘dehumanisation moves out-group members into a social category in which conventional moral restraints on how people can be treated do not seem to apply’.[26]

4.34 However, in terms of reducing the presence of abusive material, the eSafety Commissioner’s powers are ‘remedial in nature’ and ‘not designed to achieve system change’.[27] For example, the informal assistance that the eSafety Commissioner provided to the AFL during the Indigenous Round events in 2020 and 2021 consisted mainly of making informal removal requests to social media providers in cases where eSafety believed specific material might breach relevant terms of service.[28]

4.35 Mr Dagg stated that, while it was not in eSafety’s remit to ‘generally patrol’ for instances of abusive content, it was monitoring complaints data related to the Voice referendum.[29]

Gaps in the regulatory framework and consequences

4.36 In its 2019 Digital Platforms Inquiry, the Australian Competition and Consumer Commission (ACCC) found that almost none of the regulations and codes that apply to traditional media (that is, newspapers, television and radio) apply to social media platforms. These include journalistic codes of ethics, broadcasting licensing conditions, telecommunications regulations and co-regulatory schemes.[30]

4.37 Evidence about these gaps in the regulatory framework was put to the committee by multiple witnesses.

4.38 For example, Ms Dawkins stated that traditional media is subject to independent oversight, but the digital platforms, ‘who act as powerful digital media distributors’, are not.[31]

4.39 AMAN noted:

the Broadcasting Services Act 1992 does not capture online material;

the Online Safety Act 2021 only addresses cyberbullying and abuse directed at individuals, not at groups targeted on the basis of race; and

the Australian Code of Practice on Disinformation and Misinformation is self-regulatory and has an unclear enforcement mechanism.[32]

4.40 Reset.Tech argued that the ACPDM is inadequate, firstly because its ‘industry-led drafting creates sub-standard levels of protection’ and, secondly, because of its voluntary nature.[33]

4.41 The RMIT Factlab commented that there is no evidence to suggest that the ACPDM has generated any significant reduction in the rate of spread of misinformation.[34]

4.42 The Australia Institute’s Centre for Responsible Technology argued that this is due to signatories to the ACPDM having ‘failed to take the meaningful and material actions that would properly address the severity and influence of mis- and disinformation’.[35]

4.43 This means that the social media platforms’ adherence to the code, or other regulatory schemes, is heavily reliant on internal organisational circumstances, as opposed to externally enforced obligations. Any pivot away from internal safety policies or measures may have the effect of amplifying mis- and disinformation. The decline of Twitter’s internal trust and safety team was highlighted as a recent example of a failure to prevent harm:

These teams are crucial for mitigating risk and preventing harm in environments such as referenda and the campaign. Again, this is the issue with self-imposed regulation. We are incredibly reliant on platforms' cautionary investments. We are, in Twitter's example, seeing the opposite of a cautionary investment.[36]

4.44 Further, the self-regulatory nature of the ACPDM, and the framework itself, has led to significant variation in the operation of the major platforms.[37] Ms Rita Jabri Markwell, Adviser to AMAN, observed:

There's a huge amount of variation between the different platforms and what they're doing…I think some companies are keen for regulation because they see it as creating a more even playing field for those companies that really are doing nothing and those companies that are doing something.[38]

4.45 Ms Dawkins pointed out that most of the harm-reduction features that have been rolled out on Twitter and Facebook are the result of external pressure.[39]

4.46 Ms Jabri Markwell reasoned that the gaps caused the regulatory system to be ‘reactive’. She explained that, while the social media companies adhere to Australian hate speech legal standards, they will only do so ‘when a community or a person brings a legal action under our vilification and discrimination laws and has a court finding against that content’. Further, most targeted communities are not collectively organised in a way that allows resources to be dedicated to monitoring and compiling complaints to tech companies.[40]

4.47 In sum, both AMAN and Reset.Tech Australia argued that the current regulatory framework for social media platforms is failing to prevent the proliferation of misinformation and online abuse because the framework is ‘neither comprehensive nor rigorous enough to address the threats posed by electoral mis and disinformation, including threats likely to emerge in the upcoming Voice referendum’.[41]

Suggestions to improve the regulatory framework

4.48 The committee received some suggestions for more comprehensive regulation.

4.49 Reset.Tech argued that broader regulatory requirements are ‘urgently required to hold platforms accountable for exhibiting misinformation and disinformation, as well as requirements for transparency to enable effective independent oversight’. Ms Dawkins recommended proactive platform engagement in the short term. However, in the medium to long term, she suggested that a comprehensive framework like the EU’s Digital Services Act was required to raise standards.[42]

4.50 The EU’s Digital Services Act (DSA) entered into force on 16 November 2022. It places transparency and accountability obligations on platforms that disseminate misinformation and disinformation, creating specific obligations to:

conduct a systematic risk assessment of their platform at least once a year (for very large online platforms), and mitigate the identified risks or face action from regulators;

meet transparency requirements and allow ‘vetted’ independent researchers to access data, which should additionally allow independent verification of platforms' risk assessments; and

enable user appeals, through an internal complaints mechanism and an additional out-of-court settlement process.[43]

4.51 Ms Kelly Mudford, Manager, Content and Platform Projects, Australian Communications and Media Authority, stated that, while aspects of the DSA such as its mandatory compliance and auditing requirements are promising, it is too early to comment upon its effectiveness.[44]

4.52 AMAN argued that regulation was needed to address the public harm of dehumanising speech and discourse enabled by social media companies and certain news outlets.[45] In terms of proactive steps to address abusive material online, Ms Jabri Markwell suggested:

I do think we need to look at asking social media companies to grant researchers access to the data to evaluate how their algorithms are ranking content, because we know from everything else that has been published that their algorithms tend to rank the lowest quality content. Therefore, initiatives such as fact-checker initiatives et cetera are piecemeal solutions to a much bigger, systemic problem, which we could shed some light on through granting researcher access to data, and that could even possibly be considered as part of legislation.[46]
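The kind of evaluation Ms Jabri Markwell describes presupposes researcher access to ranked-content data. The sketch below is a hypothetical illustration of a simple audit that such access would enable, measuring how often independently labelled low-quality content appears among the most prominently ranked items; the record format and the 'low quality' label are assumptions rather than any platform's actual export schema.

```python
# Hypothetical audit sketch: with researcher access to ranked-content data,
# measure how much low-quality content the ranking surfaces near the top.
# The record format and the "low_quality" label are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class RankedItem:
    item_id: str
    rank: int            # 1 = most prominent position in the feed
    low_quality: bool    # label from an independent quality assessment


def low_quality_share(items: list[RankedItem], top_n: int = 100) -> float:
    """Proportion of the top-N ranked items labelled low quality."""
    top = sorted(items, key=lambda i: i.rank)[:top_n]
    if not top:
        return 0.0
    return sum(i.low_quality for i in top) / len(top)


# Example: compare the top of the feed with the data set as a whole.
sample = [RankedItem(f"p{i}", rank=i, low_quality=(i % 3 == 0)) for i in range(1, 301)]
print(f"Top 100: {low_quality_share(sample, 100):.0%} low quality")
print(f"All items: {low_quality_share(sample, 300):.0%} low quality")
```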

Footnotes

[1]See, for example, Australian Competition and Consumer Commission, Digital Platforms Inquiry, Final Report, June 2019, p. 55.

[2]See, for example, Australian Competition and Consumer Commission, Digital Platforms Inquiry, Final Report, June 2019, p. 355.

[3]Australian Electoral Commission, Submission 3, p. 1.

[4]eSafety Commissioner, Submission 10 to the Senate Select Committee on Foreign Interference through Social Media, p. 1.

[5]eSafety Commissioner, Submission 10 to the Senate Select Committee on Foreign Interference through Social Media, p. 1.

[6]Australian Communications and Media Authority, Submission 6 to the Senate Select Committee on Foreign Interference through Social Media, p. 1.

[7]The Hon Michelle Rowland MP, Minister for Communications, ‘New ACMA powers to combat harmful online misinformation and disinformation’, Media Release, 20 January 2023.

[8]Digital Industry Group Inc, Australian Code of Practice on Disinformation and Misinformation, https://digi.org.au/disinformation-code/ (accessed 2 June 2023).

[9]Australian Communications and Media Authority, Report on digital platform measures, June 2021, p. 3.

[10]Australian Communications and Media Authority, Report on digital platform measures, June 2021, pp. 3–4.

[11]Meta, Submission 13, pp. 14–19.

[12]Meta, Community Standards, https://www.facebook.com/communitystandards (accessed 29 May 2023).

[13]Josh Machin, ‘Expanding transparency around social issue ads in Australia’, Meta Australia Blog, 18 June 2021, https://medium.com/meta-australia-policy-blog/expanding-transparency-around-social-issue-ads-in-australia-c71f8e26d407 (accessed 29 May 2023).

[15]Meta, Meta's Third-Party Fact-Checking Program, 2023, https://www.facebook.com/formedia/mjp/programs/third-party-fact-checking (accessed 2 June 2023).

[16]Meta, Submission 13, p. 17.

[17]The Australia Institute, Submission 18, pp. 1–2.

[18]RMIT Factlab, Submission 17, pp. [1–2].

[19]Ms Alice Dawkins, Reset.Tech Australia, Proof Committee Hansard, 4 May 2023, p. 12.

[20]RMIT Factlab, Submission 17, p. [3].

[21]Dr Shumi Akhtar, Submission 6, p. 2.

[22]Mr Toby Dagg, Acting Chief Operating Officer, eSafety Commissioner, Proof Committee Hansard, 4 May 2023, p. 32.

[23]Mr Toby Dagg, Acting Chief Operating Officer, eSafety Commissioner, Proof Committee Hansard, 4 May 2023, p. 32.

[24]eSafety Commissioner, answers to questions taken on notice, 4 May 2023 (received 18 May 2023).

[25]Australian Muslim Advocacy Network, Submission 19, p. 4.

[26]Australian Muslim Advocacy Network, Submission 19, p. 4; AMAN’s working definitions for this harm followed a review of genocide prevention research, social psychological research, international case law and its own observations of discourse online, as outlined at Attachment A of its submission.

[27]Mr Toby Dagg, Acting Chief Operating Officer, eSafety Commissioner, Proof Committee Hansard, 4 May 2023, p. 32.

[28]eSafety Commissioner, answers to questions taken on notice, 4 May 2023 (received 18 May 2023).

[29]Mr Toby Dagg, Acting Chief Operating Officer, eSafety Commissioner, Proof Committee Hansard, 4 May 2023, pp. 31 & 33.

[30]Australian Competition and Consumer Commission, Digital Platforms Inquiry, Final Report, June 2019, p. 15.

[31]Ms Alice Dawkins, Reset.Tech Australia, Proof Committee Hansard, 4 May 2023, p. 12.

[32]Australian Muslim Advocacy Network, Submission 19, p. 6.

[33]Reset.Tech Australia, Submission 4, p. 2.

[34]RMIT Factlab, Submission 17, p. [4].

[35]RMIT Factlab, Submission 17, pp. [1–2].

[36]Ms Alice Dawkins, Executive Director, Reset.Tech Australia, Proof Committee Hansard, 4 May 2023, p. 13.

[37]The Australian Competition and Consumer Commission defined self-regulation as follows: ‘self-regulation refers to when an industry sets its own standards of conduct and is supervised by an industry body representing the interests of its members’.

[38]Ms Rita Jabri Markwell, Adviser, Australian Muslim Advocacy Network, Proof Committee Hansard, 10 May 2023.

[39]Ms Alice Dawkins, Executive Director, Reset.Tech Australia, Proof Committee Hansard, 4 May 2023, p. 13.

[40]Ms Rita Jabri Markwell, Adviser, Australian Muslim Advocacy Network, Proof Committee Hansard, 10 May 2023, p. 9.

[41]Reset.Tech Australia, Submission 4, p. ii; Australian Muslim Advocacy Network, Submission 19, p. 6.

[42]Ms Alice Dawkins, Executive Director, Reset.Tech Australia, Proof Committee Hansard, 4 May 2023, p. 14; see also Dr Shumi Akhtar, Submission 6, p. 2.

[43]Reset.Tech Australia, Submission 4, p. 4.

[44]Ms Kelly Mudford, Manager, Content and Platform Projects, Australian Communications and Media Authority, Proof Committee Hansard, p. 19.

[45]Ms Rita Jabri Markwell, Adviser, Australian Muslim Advocacy Network, Proof Committee Hansard, 10 May 2023, p. 11.

[46]Ms Rita Jabri Markwell, Adviser, Australian Muslim Advocacy Network, Proof Committee Hansard, 10 May 2023, p. 11.