Bills Digest No. 14, 2024–25

Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024

Infrastructure, Transport, Regional Development, Communications and the Arts

Author: Nell Fraser and Jaan Murphy

Key points

  • The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 aims to reduce the spread of seriously harmful misinformation and disinformation on digital communications platforms.
  • However, it is unclear if the Bill will operate in a manner compatible with Australia’s international human rights obligations related to freedom of expression. The definitions of misinformation and disinformation create some uncertainty as to the breadth of content captured.
  • The Bill introduces transparency requirements for certain digital communications platforms related to misinformation and disinformation. This includes obligations to publish information on risk management actions, media literacy plans, and complaints processes.
  • The Bill provides the Australian Communications and Media Authority (ACMA) with new powers to create digital platform rules requiring digital communication platforms to report and keep records on certain matters related to misinformation and disinformation.
  • The Bill provides ACMA with a graduated set of powers in relation to the development and registration of industry misinformation codes and misinformation standards. Registered codes and standards are enforceable.
  • The Bill has been referred to the Senate Environment and Communications Legislation Committee for inquiry, with a reporting date of 25 November 2024.
  • Both the Senate Scrutiny of Bills Committee and the Parliamentary Joint Committee on Human Rights have raised concerns with the Bill.

Introductory Info

Date of introduction: 12 September 2024

House introduced in: House of Representatives

Portfolio: Infrastructure, Transport, Regional Development, Communications and the Arts

Commencement: Schedule 1 and Schedule 2, Part 1 commence the day after Royal Assent. Schedule 2, Part 2 commences on the later of the day after Royal Assent and 14 October 2024 (immediately after the commencement of the Administrative Review Tribunal Act 2024).

 

Purpose of the Bill

The purpose of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (the Bill) is to amend the Broadcasting Services Act 1992 (BSA) and 3 other Acts to facilitate the introduction of rules, requirements, codes and standards for digital communications platform providers, which aim to reduce the spread of seriously harmful misinformation and disinformation.

The Bill:

  • imposes transparency obligations on digital communications platform providers, including requirements to publish information on risk management, media literacy plans and complaints processes
  • provides the ACMA with powers to make digital platform rules requiring providers to keep records and report on matters relating to misinformation and disinformation and
  • provides the ACMA with a graduated set of powers in relation to the development and registration of industry misinformation codes and the determination of misinformation standards, which are enforceable once registered or determined.

Structure of the Bill

The Bill is divided into 2 Schedules. Schedule 1 adds a new Schedule to the BSA related to digital communications platforms. Part 1 outlines key definitions and concepts relevant to digital communications platforms, Part 2 establishes provisions specific to combatting misinformation and disinformation, and Part 3 outlines miscellaneous matters, including enforcement.

Schedule 2 addresses consequential amendments and transitional provisions, including amendments to the Australian Communications and Media Authority Act 2005, the Online Safety Act 2021 and the Telecommunications Act 1997.

 

Background

Concern over misinformation

Misinformation is a cause for concern in Australia and internationally. The University of Canberra’s Digital News Report: Australia 2024 found that 75% of Australians are concerned about online misinformation, up from 65% in 2016 and well above the global average of 54% (p. 19). On 1 April 2022, the United Nations Human Rights Council adopted a resolution on the Role of States in countering the negative impact of disinformation on the enjoyment and realization of human rights. The resolution:

  • expressed concern at the far-reaching negative impacts of disinformation
  • stressed the need for responses to the spread of disinformation to comply with international human rights law (IHRL) and promote, protect and respect individuals’ freedom of expression and freedom to seek, receive and impart information, as well as other human rights and
  • committed to the promotion of international cooperation to counter the negative impact of disinformation on the enjoyment and realisation of human rights.

History of action in Australia

The Australian Competition and Consumer Commission’s (ACCC) Digital Platforms Inquiry in 2019 (ACCC Report) highlighted the significant risks posed by the increasing prevalence of misinformation and disinformation shared on digital platforms (pp. 352–357, 365–372, Chapter 8.3).

Following the ACCC Report, the former Coalition Government requested the development of the voluntary (industry) Australian Code of Practice on Misinformation and Disinformation (DIGI Code). The DIGI Code was developed by the Digital Industry Group Inc. (DIGI), a peak representative body for platform companies operating in Australia, and introduced in February 2021. It was subsequently updated in December 2022 after DIGI conducted a brief period of public consultation.

The Australian Communications and Media Authority (ACMA) undertook an initial review of the DIGI Code and provided a report in June 2021. The review commended the platform signatories for taking some action to combat the spread of misinformation and disinformation (pp. 44–45, 49). However, the review noted that Australians remained very concerned by the spread of this type of online content (p. 8) and pointed to numerous flaws with the DIGI Code (pp. 48–63). The report’s recommendations included:

Recommendation 3: To incentivise greater transparency, the ACMA should be provided with formal information-gathering powers (including powers to make record keeping rules) to oversee digital platforms, including the ability to request Australia specific data on the effectiveness of measures to address misinformation and disinformation. (p. 79)

Recommendation 4: The government should provide the ACMA with reserve powers to register industry codes, enforce industry code compliance, and make standards relating to the activities of digital platforms’ corporations. These powers would provide a mechanism for further intervention if code administration arrangements prove inadequate, or the voluntary industry code fails. (p. 81)[1]

Upon releasing the report in 2022, the Morrison Government announced that it ‘welcomed all five of the recommendations made in ACMA’s report’ and that the government intended to ‘introduce legislation this year to combat harmful misinformation and disinformation online.’

Following the change of government, in January 2023, the Minister for Communications, Michelle Rowland, announced that the Albanese Government intended to introduce new legislation regarding misinformation and disinformation, in response to the recommendations.

An exposure draft of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023 was released for public feedback on 25 June 2023. The exposure draft garnered 2,418 public submissions (available online) and approximately 20,000 comments. A number of submissions to the exposure draft consultation expressed concern about the compatibility of the Bill with the right to freedom of expression, including the Law Council of Australia (LCA, p. 10) and the Australian Human Rights Commission (AHRC, p. 3).

The Bill is a refinement of the exposure draft Bill.

 

Policy position of non-government parties and independents

The previous Coalition government intended to introduce legislation ‘to combat harmful misinformation and disinformation online.’ In April 2024, following high-profile stabbings in Sydney, Opposition Leader Peter Dutton reiterated support for regulation of social media giants.

However, the Coalition was highly critical of the exposure draft Bill released in 2023, arguing that it would ‘suppress legitimate free speech’.

Shadow Minister for Communications David Coleman’s press release of 12 September 2024 notes that the Coalition will ‘carefully review’ the Bill. In an interview with Sky News on 19 September 2024, Mr Coleman noted key concerns with the Bill, including its apparent coverage of opinions and the potential over-implementation of misinformation controls by social media platforms to avoid fines. The Opposition announced it would ‘strongly oppose’ the Bill in a press release on 27 September 2024.

Pauline Hanson’s One Nation Senator Malcolm Roberts has voiced opposition to the Bill in a Senate motion regarding free speech. United Australia Party Senator Ralph Babet is also strongly opposed to the Bill.

The Australian Greens have stated that they are considering the Bill and would seek to refer the Bill to a Senate Inquiry.

 

Key issues and provisions

Industry regulation

The Bill proposes the introduction of Schedule 9 to the BSA to deal with digital communications platforms. Schedule 9 will allow the ACMA to regulate the digital platform industry primarily through industry codes, standards and digital platform rules (DPRs).

Industry codes and standards

The Bill allows sections of the digital platform industry to develop codes related to protecting the Australian community from serious harms caused by misinformation and disinformation on the platforms, and to lodge these codes with the ACMA (Subdivision C of Division 4 of Part 2 of proposed Schedule 9).

The ACMA may also request codes, and itself determine industry standards (Subdivision D of Division 4 of Part 2 of proposed Schedule 9) in certain circumstances, including the absence of an industry body or the failure of an existing code.

Once registered, a code or standard is a legislative instrument that must be complied with, and contravention may result in a civil penalty. For further detail, see the Explanatory Memorandum.

Digital Platform Rules

The ACMA may also, by legislative instrument, determine DPRs (clause 82 of proposed Schedule 9) related to risk management, media literacy plans, complaint and dispute handling, and record keeping.

The Bill also proposes various transparency obligations applicable to all digital communications platforms (clause 17 of proposed Schedule 9). For further detail, see the Explanatory Memorandum (pp. 66–75).

Freedom of expression

Australia is a party to a number of international human rights conventions that deal with freedom of expression, most notably the International Covenant on Civil and Political Rights (ICCPR). Article 19 of the ICCPR imposes on Australia obligations to protect freedom of expression, subject to Article 20.

When can freedom of expression be restricted?

Article 19(3) places strict limitations on when freedom of expression can be restricted. Any restrictions must meet the conditions of legality, necessity and proportionality, and legitimacy (para. 22).

In addition, restrictions must give effect to the presumption of freedom: they must be narrowly construed, convincingly justified and not operate in a blanket, indiscriminate manner (p. 211).

In simple terms this means freedom of expression can only be limited when necessary to protect individuals, groups or the broader Australian community from serious harm or injury. Importantly, harm does not include mere offence or insults (paras 10 and 17); ‘there is no right not to be offended’ (p. 23).

However, Australia has IHRL obligations to prohibit ‘hate speech’, namely speech that:

  • advocates national, racial or religious hatred that constitutes incitement to hostility, violence or discrimination[2]
  • is direct and public incitement to genocide[3] or propaganda for war[4] or
  • disseminates ideas based on racial or ethnic superiority or hatred.[5]

Freedom of expression and misinformation and disinformation

In relation to regulating misinformation and disinformation consistently with IHRL, a 2022 UN Human Rights Council resolution notes:

all policies or legislation undertaken to counter disinformation must be in compliance with States’ obligations under international human rights law, including the requirement that any restrictions on freedom of expression comply with the principles of legality and necessity. (p. 3; emphasis added)

Further, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (Special Rapporteur) observed, in regard to misinformation and disinformation:

… the right to freedom of expression applies to all kinds of information and ideas, including those that may shock, offend or disturb, and irrespective of the truth or falsehood of the content.

[…]

The prohibition of false information is not in itself a legitimate aim under international human rights law.

The directness of the causal relationship between the speech and the harm, and the severity and immediacy of the harm, are key considerations in assessing whether the restriction is necessary. (paras 38, 40 and 41; emphasis added)

That is, any regulation of misinformation and disinformation must be necessary to protect individuals, groups or the broader Australian community from serious harm or injury, rather than from false information per se, with the immediacy of the potential harm being a key factor. 

Subparagraphs 47(1)(d)(iii) and (iv) and clause 54 of proposed Schedule 9 provide that misinformation codes or standards may not be registered or determined unless the ACMA is satisfied that they are:

Reasonably appropriate and adapted to achieving the purpose of providing adequate protection for the Australian community from serious harm caused or contributed to by misinformation or disinformation on the platforms; and goes no further than reasonably necessary to provide that protection.

While this appears to be an attempt to comply with Australia’s IHRL obligations, the Explanatory Memorandum states that these provisions are designed to ensure that there is no transgression of the implied freedom of political communication under the Australian Constitution (pp. 106–107).

Issue: defining misinformation and disinformation

The Bill aims to protect Australians from harmful misinformation and disinformation. In doing so the Bill:

  • defines misinformation and disinformation and
  • imposes a harm threshold that, in effect, requires digital communication platforms to respond to seriously harmful misinformation and disinformation in particular ways.

Definition of misinformation and disinformation

Whilst there are no universally accepted definitions of misinformation and disinformation (para. 9), clause 13 of proposed Schedule 9 defines those terms by reference to 4 elements:

  • the content:
    • contains information that is reasonably verifiable as false, misleading or deceptive
    • is provided on the digital service to one or more end users in Australia
  • the content is reasonably likely to cause or contribute to serious harm; and
  • the dissemination is not excluded dissemination.

Disinformation has an additional characteristic of:

  • having an intent to deceive or
  • being disseminated through inauthentic behaviour (discussed below). 
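
The combined effect of these elements can be sketched schematically. The following is a minimal, non-authoritative sketch in Python, offered for illustration only: the type and field names are hypothetical, and it simplifies the drafting by treating disinformation as the misinformation elements plus the additional characteristic described above.

  # Schematic sketch of the definitional elements in proposed clause 13.
  # Names are hypothetical and do not reflect the Bill's drafting.
  from dataclasses import dataclass

  @dataclass
  class Dissemination:
      verifiably_false_or_misleading: bool    # reasonably verifiable as false, misleading or deceptive
      provided_to_australian_end_users: bool  # provided on the digital service to end users in Australia
      likely_to_cause_serious_harm: bool      # harm threshold (proposed paragraph 13(1)(c))
      excluded_dissemination: bool            # exemptions (proposed clause 16)
      intent_to_deceive: bool                 # additional disinformation element
      inauthentic_behaviour: bool             # additional disinformation element (proposed clause 15)

  def is_misinformation(d: Dissemination) -> bool:
      return (d.verifiably_false_or_misleading
              and d.provided_to_australian_end_users
              and d.likely_to_cause_serious_harm
              and not d.excluded_dissemination)

  def is_disinformation(d: Dissemination) -> bool:
      # On the digest's description, disinformation shares the misinformation
      # elements and adds intent to deceive or inauthentic behaviour.
      return is_misinformation(d) and (d.intent_to_deceive or d.inauthentic_behaviour)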

Issue: what information can be ‘reasonably verifiable as false’?

Central to the Bill’s definitions of misinformation and disinformation is the provision that ‘content contains information that is reasonably verifiable as false, misleading or deceptive’ (proposed paragraph 13(1)(a)). Page 44 of the Explanatory Memorandum lists ‘some matters that could be considered’ when determining whether content fits that description. One of these matters is whether the information contains expert opinions or advice.

However, as Professor Anne Twomey pointed out in her submission on the 2023 exposure draft Bill:

  • experts may take different views of certain matters and
  • consensus views may change over time (p. 6).

The Special Rapporteur also notes ‘the impossibility of drawing clear lines between fact and falsehood’, and that ‘opinions, beliefs, uncertain knowledge and other forms of expression like parody and satire do not easily fall into a binary analysis of truth and falsity’ (para. 10).

In this regard the AHRC, in its submission on the 2023 exposure draft Bill, pointed out that, in the absence of a definition of ‘information’ in the Bill:

It is not necessarily clear whether ‘information’ [and thus misinformation and disinformation] is intended to be limited to factual content, and thus distinguished from other forms of content such as opinion, commentary, or creative content. Nor is it clear how this distinction will be applied in practice, given the volume of online content that involves a combination of both factual and other forms of content. It is also difficult to understand how content will be characterised when information is incomplete, more than one reasonable interpretation is open, or information is accurate at one point in time but subsequently becomes inaccurate. (p. 7)

Even if the Bill does apply to non-factual content, such as opinions, the harm threshold must be satisfied for it to amount to misinformation and disinformation (discussed below; proposed paragraph 13(1)(c)).

Exemptions for ‘reasonable dissemination’

As provided for by proposed clause 13, the dissemination of content cannot be misinformation or disinformation if it is excluded dissemination. The definition of this term is provided at proposed subclause 16(1):

  • dissemination of content that would reasonably be regarded as parody or satire
  • dissemination of professional news content (defined at proposed subclause 16(2))
  • reasonable dissemination of content for any academic, artistic, scientific or religious purpose.

This carve-out reflects wording in other pieces of Australian legislation such as the Racial Discrimination Act 1975, which provides exemptions for ‘anything said or done reasonably and in good faith … for any genuine academic, artistic or scientific purpose or any other genuine purpose in the public interest’ (paragraph 18D(b)). Exemptions for religious purposes are provided in other Commonwealth anti-discrimination legislation, including the Sex Discrimination Act 1984 (section 37) and the Age Discrimination Act 2004 (section 35), both of which provide exemptions for ‘an act or practice that conforms to the doctrines, tenets or beliefs of that religion or is necessary to avoid injury to the religious susceptibilities of adherents of that religion’. Likewise, some relevant state legislation also provides exemptions for various purposes, including religious ones (for example, paragraphs 38S(2)(c), 49ZE(2)(c) and 49ZT(2)(c) of the Anti-Discrimination Act 1977 (NSW)).

The Explanatory Memorandum notes that ‘it is intended that the reasonableness standard in paragraph 16(1)(c) be interpreted similarly to the way in which Australian courts have considered the standard of reasonableness for the purposes of section 18D of the Racial Discrimination Act 1975’ (p. 63). While the definition of disinformation in the Bill includes an element of intentionality, it is notable that the exemptions at proposed clause 16 do not include the ‘in good faith’ principle contained within the Racial Discrimination Act. The Bill also does not exempt dissemination of content for ‘any other genuine purpose in the public interest’, as provided in the Racial Discrimination Act.

Issue: harm threshold and types of harm

Issue: harm threshold

Under the Bill, content is only considered to be misinformation and disinformation if it is ‘reasonably likely to cause or contribute to serious harm’ (proposed paragraph 13(1)(c)).

Proposed subclause 13(3) lists factors that must be considered when determining if content is ‘reasonably likely to cause or contribute to serious harm’. The list also includes ‘any other relevant matter’ (discussed at pages 45–46 of the Explanatory Memorandum). These factors are similar to those listed in the Exposure Draft.

In this regard, the LCA concluded that the factors:

repose significant discretion in an executive decision-maker, including by making judgements in respect of favoured and disfavoured ‘authors’ or ‘purposes’, without any express obligation to have regard to freedom of expression, privacy, broader human rights or any other countervailing public interest criteria. (p. 21)

The AHRC also expressed concerns about the ‘reasonably likely’ threshold:

The use of ‘reasonably likely’ and ‘contribute to’ both lower the threshold and significantly expand the potential reach of these provisions.

Content does not have to actually cause or contribute to serious harm, but simply be ‘reasonably likely to do so’. This… risks an overly-inclusive characterisation of content as being misinformation or disinformation. (p. 8)

The potential breadth of misinformation or disinformation characterised by this clause may be further exacerbated by proposed subclause 13(4), which provides that the Minister may, by legislative instrument, determine a matter to which regard must be had in determining whether the provision of content is reasonably likely to cause or contribute to serious harm. This subclause was not in the Exposure Draft.

As noted above, under IHRL misinformation and disinformation regulation is only permitted where necessary to protect people from serious harm or injury.

Issue: types of harm

Proposed clause 14 provides a multi-faceted definition of ‘serious harm’ with 6 discrete categories of harm:

  • election interference
  • harming public health, including preventative health measures
  • vilification of individuals or groups with certain characteristics
  • intentionally physically injuring an individual
  • imminent damage to critical infrastructure or disruption to emergency services and
  • imminent harm to the Australian economy.

Importantly, to qualify as ‘serious harm’, each of the above types of harm must have either:

  • significant and far-reaching consequences for the Australian community or a segment of the Australian community or
  • severe consequences for an individual in Australia.

Whilst this appears to meet the IHRL requirement of the necessary severity of the harm (paras 40 and 41), some of the types of harm included in the definition of serious harm raise issues regarding their compatibility with the right to freedom of expression under IHRL. Under IHRL, restrictions to freedom of expression are permitted only on the grounds listed at Article 19(3) of the ICCPR.

Issue: immediacy of harm

Under IHRL, the immediacy of the potential harmfulness of content is a key factor in determining whether a restriction of freedom of expression is necessary (para. 41). In this regard, the categories of harm that lack an ‘imminent’ element have the potential to impermissibly restrict freedom of expression if applied to misinformation and disinformation whose potential harm is not immediate.

Issue: harm to public health

IHRL allows freedom of expression to be restricted when necessary to protect public health.

In this respect, the Bill defines harm as including:

Harm to public health in Australia, including to the efficacy of preventative health measures in Australia (proposed paragraph 14(b)).

The Explanatory Memorandum further elaborates at pages 48–50, referring to what the World Health Organization termed an ‘infodemic’ during the COVID-19 pandemic. The Special Rapporteur’s report on Disease pandemics and the freedom of opinion and expression also discusses the complexity of information sharing during the COVID-19 pandemic.

The Explanatory Memorandum notes:

misinformation and disinformation that might [constitute the defined harm] could relate to how a disease is spread, the safety and effectiveness of vaccines or other preventive health measures, or health treatment options not supported by clinical data. (p. 48)

However, experts can disagree on matters related to public health. For example, from early in the COVID-19 pandemic, experts disagreed about measures that had significant impacts on people’s everyday lives. There were changes, reversals and confusion in expert advice and substantial differences in public health measures across Australia and internationally.

As such, as pointed out by Professor Anne Twomey in her submission on the 2023 exposure draft Bill (p. 6), it is not clear, when differing expert opinion or conflicting scientific evidence exists, how the falsity or correctness of information will be able to be determined.

Issue: meaning of vilification

Vilification is included as a type of harm but is not defined by the Bill or the BSA.

Existing Commonwealth, state and territory anti-vilification legislation differs substantially in its definition and treatment of ‘vilification’, meaning the term, when used generally, ‘lacks precise meaning’ (pp. 324, 326). For example, some Australian legislation does not require incitement to physical violence or other harms, and/or includes terms such as ‘contempt’, ‘ridicule’, ‘insults’ and ‘offends’ (pp. 324–333). This raises concerns about the compatibility of such provisions with freedom of expression under IHRL (pp. 335–338), where ‘expression may only be prohibited where it “clearly amounts to incitement to hatred or discrimination”’ (para. 17; emphasis added).

Whilst the meaning of ‘vilification’ in the Bill would be a matter for the courts to determine, there appears to be a risk that the Bill will capture ‘lawful but awful’ misinformation and disinformation, and thus potentially impermissibly restrict freedom of expression (p. 11).

Issue: harm to the Australian economy

The Bill defines harm as including:

Imminent harm to the Australian economy, including harm to public confidence in the banking system or financial markets. (proposed paragraph 14(f)).

This definition differs from the definition in the Exposure Draft – ‘economic or financial harm to Australians, the Australian economy or a sector of the Australian economy’ – with a notable new requirement that the harm be ‘imminent’ and the removal of harm to individuals.

The Explanatory Memorandum elaborates on this category of harm at pages 56–57, including:

Examples of online content that could cause or contribute to this type of harm could include false or misleading content about the financial health of a corporation, aimed at manipulating stock prices; or false or misleading content warning about the financial health of a financial institution, which if disseminated at a certain scale, could provoke a ‘digital bank run’.

In their submissions to the Senate inquiry into the Bill, multiple stakeholders have expressed concern that this category of harm remains overly broad. The Victorian Bar argues that the definition does not include any requirement for harm to actually manifest in economic loss (p. 9), while the AHRC advocates that the paragraph be removed from the Bill as it may censor legitimate discussions (such as those ‘criticising a major company’s environmental or human rights record or policies’) that may ‘unfavourably affect market trends or corporate reputations’ (p. 5).

Issue: disinformation, intent and inauthentic dissemination

Under proposed paragraph 13(2)(e) disinformation is distinguished from misinformation by whether or not:

  • there are grounds to suspect that the person disseminating the content intends for it to deceive another person or
  • the dissemination involves inauthentic behaviour.

Some Australian academics suggest that the distinction between misinformation and disinformation ‘is not necessary or helpful’ and should be removed from the Bill, as there may be no difference in the harm caused by the two types of content (p. 4). Further, it is difficult to determine the intent behind a person’s actions (p. 21). However, reports from the Special Rapporteur suggest that, despite the challenges in determining intent, under IHRL it remains important to distinguish between the two concepts.[6] The Explanatory Memorandum discusses factors that may be used to judge whether a piece of information is disinformation at pages 45–46.

The Bill provides a definition of inauthentic behaviour at proposed clause 15. This includes where the dissemination uses an automated system (for example, an algorithm or bot network) in a way that is reasonably likely to mislead an end user about a matter such as the identity, purpose or origin of the person disseminating the content; or there are ‘grounds to suspect’ that the dissemination is part of a coordinated action that is reasonably likely to mislead an end-user on a number of matters relating to the content’s integrity.

For the purposes of the Bill, this distinction is significant as proposed subclause 67(1) implies that an approved misinformation code or a misinformation standard could require a digital communications platform provider to:

  • remove content that is disinformation that involves inauthentic behaviour and
  • prevent an end-user from using a platform where the end-user is engaged in disinformation that involves inauthentic behaviour.

This does not apply to misinformation. The registered codes and standards may also treat content containing misinformation and disinformation in different ways.

Issue: scope of ACMA’s powers and enforcement provisions

ACMA’s power to approve Codes and to monitor non-compliance

While the Bill only provides the ACMA with the authority to make misinformation standards in certain circumstances where there is a lack, or failing, of an industry-developed misinformation code (proposed clauses 55–59), the ACMA has power to choose whether or not to register a code (proposed clause 47). This level of oversight is similar to other areas of media regulation, for example, the role of the eSafety Commissioner in the registration of industry standards under the Online Safety Act 2021 (Division 7 of Part 9).

As mentioned above, the ACMA does not have total discretionary power in registering codes and standards. Proposed subparagraphs 47(1)(d)(iii) and (iv) and clause 54 of proposed Schedule 9 provide that misinformation codes or standards may not be registered or determined unless the ACMA is satisfied that they are:

Reasonably appropriate and adapted to achieving the purpose of providing adequate protection for the Australian community from serious harm caused or contributed to by misinformation or disinformation on the platforms; and goes no further than reasonably necessary to provide that protection.

If the ACMA were thought to be registering codes or standards in breach of these provisions, the registration could be challenged as ultra vires.[7]

The ACMA is also afforded the power to pursue enforcement action for contravention of misinformation codes (proposed clauses 52, 53, 74 and 75). While the Explanatory Memorandum notes that ‘in practical terms, digital communications platform providers will need to identify misinformation or disinformation themselves’ (p. 44), the AHRC, in its submission to the 2023 exposure draft Bill, questioned the role of the ACMA in this assessment, arguing that:

while the ACMA is not provided with the power to directly regulate individual pieces of content, it is difficult to see how it could exercise its [compliance powers]… without itself making content assessments and deciding whether individual pieces of content fall within the legislative definitions of misinformation and disinformation. In practice, the Exposure Draft ultimately gives ACMA powers to regulate digital content, and to impose significant fines on digital platforms if it determines that they are not doing enough to stop misinformation or disinformation. (p. 10)

Research and policy organisation Reset.Tech, however, suggests that ACMA does not have sufficient power, arguing that ‘ACMA should be immediately empowered to bypass industry codes and set a standard’ (p. 9).

ACMA can compel ‘other people’ to provide information and documents

As the Explanatory Memorandum stresses, the Bill does not consider individual pieces of content, nor does it put obligations on individual persons; rather, the focus is on systems and processes (p. 6). However, proposed clause 34 explicitly concerns individual people, and proposed subclause 34(7) creates a civil penalty provision that could apply to an individual person.

In brief, under proposed clause 34, the ACMA may compel a person to provide information or documents if the ACMA has reasonable grounds to believe that the person has information or a document relevant to misinformation or disinformation on a digital communications platform, or how a platform handles misinformation or disinformation. Proposed subclause 34(2) specifies that these provisions cannot be applied in relation to content posted by the person, other than in specific circumstances related to professional work for a digital platform provider. While the Minister’s second reading speech implies that this exempts individuals from the ACMA’s information-gathering powers, except in the specific circumstances noted at proposed subclause 34(2), the wording of the Bill suggests that this exemption is only in relation to content ‘posted by the person’, not information posted by another person.  

Issue: when does private messaging become ‘public’?

The Bill includes broad exemptions for private messages from platforms’ regulation of misinformation and disinformation, with some exceptions (see proposed clause 45, and proposed paragraphs 30(3)(a), 33(3)(a) and 34(4)(a)).

The definition of private message (at proposed clause 2) provides that a message sent using a digital communications platform by an end-user to another end-user or a number of end-users ceases to be a private message when the number of recipients exceeds a certain number. By default, this number is set at 1,000 end-users. This provision is not surprising, given the recent reporting on the role of messaging services in disseminating misinformation and disinformation to large groups, where it is reasonable to assume that the message is meant for wide broadcast rather than private communication. The default number of 1,000 end-users will have immediate implications for popular messaging services such as WhatsApp and Telegram, which allow large numbers of end-users to join private chat groups (1,024 and 200,000 respectively) and which also employ end-to-end encryption. It is unclear whether platforms will still be required to be accountable for content in group messages with 1,000 people when encryption is in use. There has been extensive debate on this issue in regard to child sexual abuse content online.

The definition of private message also allows another number to be specified in the digital platform rules (set by the ACMA through disallowable instrument). This could ostensibly allow the ACMA to set the number as low as 2.

Issue: maximum penalties and over-enforcement

Similar to other pieces of communications legislation, under the Bill, the ACMA has a number of enforcement options available when a person contravenes certain provisions. In these circumstances, the Bill provides the ACMA with the option to either seek a civil penalty or provide a written direction requiring the provider to take specified action directed towards ensuring that the provider does not contravene the provision in the future. Division 1 of Part 3 of proposed Schedule 9 to the BSA outlines further enforcement mechanisms.

Pecuniary penalties payable by persons in respect of civil penalties are provided at proposed subsection 205F(5E) of the BSA, at item 20 of Schedule 2 to the Bill. Penalties are substantial for contravention of civil penalty provisions in respect of misinformation codes and misinformation standards, and may be up to:

  • the greater of 10,000 penalty units and 2% of the annual turnover of a body corporate for non-compliance with a misinformation code[8] and
  • the greater of 25,000 penalty units and 5% of the annual turnover of a body corporate for non-compliance with a misinformation standard.

For individuals, the maximum penalties would be 2,000 penalty units or 5,000 penalty units (respectively).
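
As an illustration of how the ‘greater of’ formula operates, the following is a minimal sketch in Python. The penalty unit value and the platform turnover figure used here are illustrative assumptions only; they are not taken from the Bill.

  # Minimal sketch: maximum civil penalty for a body corporate, as described above.
  # The penalty unit value and the turnover figure are illustrative assumptions.
  PENALTY_UNIT_VALUE = 313          # assumed value of one Commonwealth penalty unit (AUD)

  def max_corporate_penalty(penalty_units: int, turnover_fraction: float, annual_turnover: float) -> float:
      """Return the greater of the penalty-unit amount and the turnover-based amount."""
      return max(penalty_units * PENALTY_UNIT_VALUE, turnover_fraction * annual_turnover)

  annual_turnover = 1_000_000_000   # hypothetical platform with AUD 1 billion annual turnover
  print(max_corporate_penalty(10_000, 0.02, annual_turnover))  # misinformation code: 20,000,000.0
  print(max_corporate_penalty(25_000, 0.05, annual_turnover))  # misinformation standard: 50,000,000.0

On these illustrative assumptions, the turnover-based amount is the operative maximum for any platform whose annual turnover exceeds roughly $156.5 million.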

The LCA noted in its submission to the 2023 exposure draft Bill:

The Guidance Notes state that the intention of the significant maximum penalties is to ‘deter systemic non-compliance by digital platform services and reflects the serious large scale social, economic and/or environmental harms and consequences that could result from the spread of misinformation or disinformation’.

However, the Law Council is aware of concerns that these penalties may lead to digital platform services becoming overly careful in censoring content on their platform to limit their risk of receiving (potentially significant) fines or other penalties. (link and emphasis added; p. 13)

This issue was highlighted as a key theme in the comments received on the exposure draft Bill.

Issue: difficulties with extra-territorial enforcement

Clause 3 of proposed Schedule 9 to the BSA states:

This Schedule extends to acts, omissions, matters and things outside Australia.

Extra-territorial application is necessary for the Bill to have any practical application, given that the vast majority of digital communications platform providers are not based in Australia. However, it does raise questions over the practicality of this enforcement.

For example, the recent Federal Court case between the eSafety Commissioner and X over an order for X to remove Class 1 violent material on the platform under the Online Safety Act highlights the difficulty of ‘jurisdictional issues in cyberspace’. Ultimately, the eSafety Commissioner dropped the case, despite concerns that X’s geo-blocking of tweets in Australia did not amount to taking all reasonable steps to prevent Australians from accessing the tweets. Elon Musk, the Executive Chairman of X, has expressed strong opposition to the Bill.

The Australian Strategic Policy Institute has recommended that the ACMA should ‘require all large digital platforms to have an Australian corporate presence… [to] give the public and government more options to work with platforms to counter mis- and disinformation’ (p. 6).