4. Policing the Trolls

Legal Protections and Government Action on Online Safety

Introduction

4.1
Australia is at the forefront of legislative responses to online harm. With the introduction of the Online Safety Act 2021 (OSA), building on a suite of other measures enacted over the past decade, Australian law provides a strong model for legislatures around the world in regulating online harms.
4.2
This chapter outlines the key elements of Australia’s online safety legislative regime and the associated Government policy measures, before considering international responses to the challenge of regulating online environments.
4.3
Notwithstanding these achievements, the online environment does not stand still while governments create regulation. Developments in technology and platforms occur at a dizzying pace, and the present regulatory settings may no longer fit the digital world within even a few short months. This chapter also considers potential areas where Australia’s regulatory framework may require further change to appropriately address online safety concerns.

Legislative framework governing online safety

4.4
This section outlines the recent history of legislative powers in relation to online safety. It examines the activities and powers of the Office of the eSafety Commissioner, including the programs it conducts (such as educational tools and reporting schemes). It also considers how responsibility for these powers is divided across portfolios.

Legislative powers pre-2021

4.5
Prior to 2015, Australia’s primary legislation governing online safety was the Broadcasting Services Amendment (Online Services) Act 1999.
4.6
In 2015, the Australian Government enacted the world-first Enhancing Online Safety Act 2015 (EOS Act), which established the role of the eSafety Commissioner (first named the Children’s eSafety Commissioner) and the Office of the eSafety Commissioner (eSafety).
4.7
eSafety’s powers and functions were further expanded by the 2017 amendment of the EOS Act, particularly the expansion of the eSafety Commissioner’s remit to cover all Australians.1 A civil penalties scheme was established in 2018, enabling eSafety to assist with the removal of intimate images or videos from online platforms and, in some cases, to take action against the perpetrator, social media services, websites or hosting providers.2
4.8
eSafety has additional powers and functions in relation to certain categories and forms of content under other legislation, such as:
the Broadcasting Services Act 1992 (schedules 5 and 7);
the Telecommunications Act 1997 (section 581(2A));
the Enhancing Online Safety (Protecting Australians from Terrorist or Violent Criminal Material) Legislative Rule 2019 (section 5); and
the Criminal Code Act 1995 (sections 474.35 and 474.36).

Online Safety Act 2021

4.9
The EOS Act was superseded by the Online Safety Act 2021 (OSA). The OSA was passed by the Parliament in June 2021 and came into force on 23 January 2022.
4.10
The intention of the OSA, according to the Department of Infrastructure, Transport, Regional Development and Communications (Infrastructure), is that ‘the rules and protections Australians enjoy offline should also apply online … and the Act provides a safety net for people when things go wrong online’.3
4.11
The OSA creates or updates reporting schemes to address cyberbullying, adult cyber abuse, image-based abuse, illegal and restricted online content, and abhorrent violent content.4 The OSA gives eSafety powers to require the removal of content reported under these schemes, with non-compliance attracting civil penalties directed at the poster of such material and the provider of the service hosting it.5
4.12
In addition, the OSA gives eSafety new information-gathering and investigative powers to assist in the administration of the schemes. It also permits eSafety to make Restricted Access System declarations, to control access to certain materials based on age.6
4.13
The OSA also provides for a mandatory independent review of its operation, set for three years after its commencement (23 January 2025). eSafety will also report on the operation of its schemes in its annual report.7

Reporting functions

4.14
The new legislation introduces or updates a number of reporting functions and schemes administered by eSafety, which are discussed below in detail.

Adult Cyber Abuse reporting scheme

4.15
The OSA establishes a new cyber abuse reporting scheme for adults, designed for Australians over the age of 18 years to report harmful conduct across a range of platforms.8 The scheme requires online service providers and/or individual users to remove cyber abuse that targets Australian adults ‘with the intention of causing serious harm’.9 The legislation stipulates that the threshold for what constitutes ‘cyber abuse’ is high and refers to the most serious forms of abuse, requiring two elements:
1
The abuse must be intended to cause ‘serious harm’, meaning serious physical harm or serious harm to a person’s mental health, such as threats intended to cause serious psychological harm or serious distress that goes beyond ordinary fear.
2
The abuse must also be menacing, harassing or offensive in all the circumstances.10
4.16
eSafety provided the following examples of abuse sufficient to meet the adult scheme’s threshold:
publishing private or identifying information about an individual with malicious intent to cause serious harm; encouraging violence against a specific Australian adult based on their religion, race or sexuality; and threats of violence that make a person afraid they will suffer physical harm.11
4.17
It was noted that the threshold for adult cyber abuse is significantly higher than the threshold applied to the children’s cyberbullying scheme. eSafety explained that this is due to the ‘expectation that adults are generally more resilient than children’, and to ensure the thresholds are consistent with those contained in the Criminal Code.12 It did, however, note that it would monitor whether the legislative definition should be changed given the concerns about the high threshold.13
4.18
eSafety estimates that, since the commencement of the OSA and with investigations still incomplete, fewer than 10 per cent of complaints meet the required threshold.14 Further, as at 15 February 2022, eSafety had received over 200 complaints in relation to serious adult cyber abuse which met the requisite threshold.15 Nonetheless, eSafety stated that it would deliver advice and guidance to all people seeking assistance with cyber abuse, regardless of whether their complaints met the threshold.16
4.19
Further, eSafety advised that certain topics, such as hate speech directed at groups rather than individuals, are beyond the scope of the new adult reporting scheme. It did confirm, however, that in assessing adult cyber abuse reports eSafety would consider factors such as whether the abuse targeted the individual in question on the basis of race or cultural background.17

Children’s cyberbullying scheme

4.20
The cyberbullying scheme focused on children has also been updated in the OSA, reflecting the range of services outside of social media where cyberbullying can occur.18 eSafety explained that:
whereas the previous scheme was limited to 14 specific social media services across two ‘tiers’ – one voluntary and one mandatory – the enhanced scheme will apply to all social media services, as well as a range of other services where cyberbullying can occur, such as messaging and gaming services.19

Image-based abuse reporting scheme

4.21
The image-based abuse scheme was also updated to enable eSafety to ‘rapidly address the non-consensual sharing of intimate images’.20 The pre-2021 scheme, which commenced in September 2018, enabled eSafety to investigate and act in response to complaints regarding the sharing or threatened sharing of intimate images without consent.21
4.22
The OSA makes small but significant changes, such as:
The introduction of a reduced time period (now set at 24 hours for all schemes) for online service providers to respond to a removal notice from the eSafety Commissioner; and
A new discretion for eSafety to publicly name service providers who consistently fail to manage online harms such as image-based abuse.22

Basic Online Safety Expectations

4.23
A key feature of the OSA is its creation of Basic Online Safety Expectations (BOSE). Under the BOSE, more responsibility will be placed on social media and internet companies to provide a safe environment for their users.23 The BOSE consists of:
Core expectations, which are already laid out in the OSA itself;
Additional expectations, which may be specified by the Minister via legislative instrument; and
Reporting requirements, which will be imposed on social media and internet companies.24
4.24
Infrastructure stated that the BOSE was intended to set ‘minimum safety expectations of online service providers, establishing a benchmark for online service providers to take proactive steps to protect the community from abusive conduct and harmful content online’.25 Further, the BOSE establishes that:
Providers of these services are expected to take steps to meet the Expectations included in the Determination and protect Australians from unlawful and harmful material and activity that falls within the remit of the enabling legislation the Online Safety Act 2021 (the Act), or impedes the online safety of Australians.26
4.25
As authorised under section 45 of the OSA, the BOSE was established on 20 January 2022 as a legislative instrument. The Online Safety (Basic Online Safety Expectations) Determination 2022 (the BOSE Determination) sets out six key areas of expectations for social media and online platforms, which are divided into core expectations and additional expectations. Table 4.1 provides a breakdown of the BOSE Determination’s expectations.
4.26
The eSafety Commissioner, Ms Julie Inman Grant, stated that the OSA’s establishment of the BOSE would assist in shifting the burden of responsibility of online safety towards social media platforms:
The act is raising the bar on what government expects of the tech industry by introducing a basic set of online safety expectations, or the BOSE. Mandatory industry codes will also require the online industry to detect and remove illegal material, while preventing access to harmful content. Unlike [eSafety’s] reporting schemes, the BOSE are not limited to specific forms of online harm and may enable us to shine a light on systemic failings and will compel transparency that is currently lacking, tackling issues such as online hate, self-harm content and the extent to which algorithms contribute to harm.27
Table 4.1:  Basic Online Safety Expectations - breakdown of expectations by category
Safe use
Expectation/s or core expectation/s on providers: Reasonable steps required to ensure safe use.
Additional expectation/s on providers: Consultation with the eSafety Commissioner and reference to eSafety’s guidance in determining reasonable steps; reasonable steps required regarding encrypted services; reasonable steps required regarding anonymous accounts; consultation and cooperation with other providers in promoting online safety.
Treatment of certain content and activity
Expectation/s or core expectation/s on providers: Reasonable steps required to minimise provision of certain content;28 reasonable steps required to prevent access of class 2 material to children.29
Additional expectation/s on providers: -
Reports and complaints regarding certain content30
Expectation/s or core expectation/s on providers: Provision of mechanisms to report and make complaints regarding certain content; provision of mechanisms to report and make complaints regarding breaches of terms of use.
Additional expectation/s on providers: Provision of terms of use, certain policies, etc.; provision of accessible information on how to make reports or complaints to the eSafety Commissioner.
Expectations regarding making certain information accessible
Expectation/s or core expectation/s on providers: -
Additional expectation/s on providers: Provision of accessible information on terms of use, policies and complaints, etc.; provision of updates about changes in policies, terms and conditions, etc.
Record-keeping
Expectation/s or core expectation/s on providers: -
Additional expectation/s on providers: Records kept regarding certain matters.
Dealings with eSafety Commissioner
Expectation/s or core expectation/s on providers: Provision of requested information to the eSafety Commissioner.
Additional expectation/s on providers: Implementation of a designated contact point.
Source: Online Safety (Basic Online Safety Expectations) Determination 2022, 20 January 2022, available at: https://www.legislation.gov.au/Details/F2022L00062
4.27
Compliance with the BOSE hinges on eSafety’s power to require platforms to report on how they are meeting the expectations, bolstered by a civil penalties scheme and other enforcement mechanisms. eSafety also has the power to publish statements about how particular services are meeting the expectations.31

Additional industry codes or standards

4.28
The OSA provides for the registration of new and improved industry codes or standards, requiring the Australian digital industry to take measures to address online safety. eSafety argued that the codes or standards will establish a ‘regime of modernised industry codes or standards, expanded to additional sections of the online industry to make sure the whole digital ecosystem is playing its part’.32
4.29
These codes and standards are to address how online platforms can assist users to manage and limit access to harmful content, particularly in relation to Class 1 and Class 2 forms of content under the Online Content Scheme.33
4.30
The codes and standards are to apply to eight key sections of the online industry, including providers of content such as social media and other online communication services, internet and hosting providers, manufacturers and suppliers of internet-related equipment, and providers who install and maintain such equipment.34
4.31
A key feature of the codes and standards is that they are mandatory, with eSafety responsible for directing compliance. Non-compliance with a direction from eSafety to comply with these codes or standards may attract a civil penalty of 500 penalty units (approximately $111,000 for individuals and $555,000 for organisations).35
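As an illustration only, these dollar figures are consistent with the Commonwealth penalty unit value in force when the OSA commenced (assumed here to be $222 per unit) and a five-fold multiplier for bodies corporate:
\[
500 \text{ penalty units} \times \$222 = \$111{,}000 \qquad 5 \times \$111{,}000 = \$555{,}000
\]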
4.32
Figure 4.1 provides a breakdown of how eSafety expects the regulation of Class 1 and Class 2 material to operate. It also demonstrates the shift from a self-regulatory regime towards a co-regulatory framework and, where necessary, a harder, more formal approach.

Figure 4.1:  Regulation of Class 1 and Class 2 Material under the Online Safety Act 2021

Source: eSafety Commissioner, Development of industry codes under the Online Safety Act – Position Paper, September 2021, available at: https://www.esafety.gov.au/sites/default/files/2021-09/eSafety%20Industry%20Codes%20Position%20Paper.pdf (accessed 11 February 2022), p. 37.
4.33
The eSafety Commissioner explained how these codes were being developed in consultation with industry:
The approach that we took to these codes, just to remind you what these codes do, is that they require eight subsectors of the online industry to proactively detect and remove class 1 content and then restrict access to class 2 content. What we decided to do in consultation with the industry was to say, 'You can take the legislation and what's in the executive memorandum and come up with a set of codes and then we can decide whether we'll register them.' We tried to do some of the heavy lifting by putting four months into helping demystify and come up with a paper that had very clear outcomes and the risks and harms that we were seeking the companies to prevent, and they then had to come back to us.36
4.34
The eSafety Commissioner also made clear that if she was not satisfied with the codes developed by industry actors, she was not obliged to register them and could move towards the implementation of a mandatory standard to cover the industry at large.37
4.35
The OSA requires that ‘reasonable efforts’ be made to ensure that an industry code is registered and in place within six months of the commencement of the Act (by 23 July 2022). If a code cannot be agreed, an industry standard should be registered within 12 months of commencement (by 23 January 2023).38 The Digital Industry Group Inc (DIGI) advised the Committee that it was drafting parts of the codes relating to social media, search engines and app distribution services.39

Other legislative measures

4.36
Legislative measures outside of the OSA and related regulation tend to focus primarily on law enforcement powers and counter-terrorism measures. Within the last decade, Australia's security and law enforcement agencies have gained extensive powers to investigate and disrupt criminal acts that occur online or which are facilitated by digital technologies.
4.37
Three notable recent expansions of law enforcement powers include:
The Surveillance Legislation Amendment (Identify and Disrupt) Act 2021 (the SLAID Act)40;
The Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (the Assistance and Access Act)41; and
The Telecommunications Legislation Amendment (International Production Orders) Act 2021 (the IPO Act).42
4.38
The Assistance and Access Act grants security agencies the power to require telecommunications providers to build or use capabilities to give law enforcement targeted access to online data. Further, the Act gives law enforcement and security agencies powers to:
Covertly access ('hack') devices;
Search devices such as laptops, mobile phones and USBs, and collect information; and
Conceal the fact that a device has been accessed.43
4.39
The SLAID Act permits Government agencies to take covert control of online accounts (for example, email or social media accounts), to 'add, copy, alter and delete data in computers', and to collect information from devices that are used, or likely to be used, by the subject of a warrant.44
4.40
The IPO Act is intended to improve Australian law enforcement agencies’ access to data held overseas by foreign-based companies that operate in Australia. The Department of Home Affairs (Home Affairs) argued that the IPO Act will assist in the investigation of a range of technology-dependent offences, including ransomware attacks, child sexual exploitation and abuse, and serious and organised crime.45
4.41
A number of other reforms by the Australian Government have been undertaken in the last five years which address aspects of online safety. These include:
‘Carly’s Law’, criminalising online acts to prepare or plan to cause harm to, or engage in sexual activity with, a person under 16 years of age;46
New criminal offences for ‘grooming’ a third party and for facilitating dealings with child abuse material, as well as increased penalties and presumptive minimum sentences for child sex offences;47
Legislation restricting online gambling promotion;48
New civil and criminal penalties to address the non-consensual sharing of intimate images;49
New offences directed at internet service providers (including hosts or content providers) for failure to report or remove live or streaming violent content, enacted in the wake of the 2019 Christchurch terrorist attack;50 and
The introduction of the Australian Code of Practice on Disinformation and Misinformation, which is a voluntary code which commits member companies to reduce the risk of online misinformation.51

Social Media (Anti-Trolling) Bill 2022 and defamation reform

4.42
The Australian Government has introduced a bill to enable Australians to ‘unmask’ anonymous trolls who post defamatory material online.
4.43
The Social Media (Anti-Trolling) Bill 2022 (the Bill) is designed to address the findings of the High Court of Australia in the Voller decision. The Court held that owners of webpages or social media pages can be held liable, as publishers, for defamatory comments posted by third parties on those pages.52
4.44
The Bill seeks to clarify that social media page owners are not ‘publishers’ of defamatory material posted on their pages by third parties, thereby protecting them from defamation liability. The Bill also provides for victims of defamation to identify anonymous users who post defamatory material, either by using a complaints mechanism operated by the provider to obtain the originator’s contact details, or by obtaining an ‘end-user information disclosure order’ from a court.53
4.45
Concurrent with this inquiry, the Attorney-General’s Department (AGD) conducted a public consultation on the exposure draft of the Bill.54 The Bill was subsequently introduced in the House of Representatives on 10 February 2022.55 The provisions of the Bill were referred to the Senate Legal and Constitutional Affairs Legislation Committee on 10 February 2022, and that committee is due to report its findings to the Senate on 24 March 2022.
4.46
Given this, the Committee did not enquire in depth into the proposed anti-trolling legislation. However, a broad range of evidence was gathered through the public hearings and submissions regarding online harm caused by trolling that may fall below the threshold of the adult cyber abuse scheme. Such abuse includes, but is not limited to, damaging and defamatory remarks posted online by anonymous trolls.
4.47
The provisions of the Bill aim to give victims of anonymous online trolling recourse to unmask their abusers. The Bill also aims to incentivise social media platforms to provide identifying information to victims of online defamation.
4.48
As Ms Tanya Hosch, Executive General Manager of the Australian Football League stated:
I think the biggest difficulty is the fake accounts and the ability of people to set up online accounts without having to reveal their own personal identities. It means it can often be incredibly hard to trace these people. We put our reports of this material through our integrity department, which is staffed almost entirely, I think, by former police who’ve got significant experience in investigation work. They work closely with our social media team, and they will also reach out and liaise with the eSafety Commissioner. But that combination of effort not only is incredibly time-consuming and cumbersome but frequently leads us to a dead end because of the identities not being known to us. Even if they are members of an AFL club, we won’t necessarily know that. Sometimes they will repeat the behaviour towards particular players or individuals through various identities that they set up. That is a constant frustration.56
4.49
In addition to the Bill, a process is underway to review the Model Defamation Provisions in state and territory laws. The reform process, which was initiated by the Council of Attorneys-General in 2018, is being led by NSW, and is currently determining how to address the question of internet intermediary liability in defamation for the publication of third-party content.57

Online privacy law reform

4.50
The Australian Government is currently planning to overhaul privacy law in relation to online services. It has proposed to strengthen the Privacy Act 1988 (Privacy Act) by passing the Privacy Legislation Amendment (Enhancing Online Privacy and Other Measures) Bill 2021 (the Online Privacy Bill). According to the AGD, the Online Privacy Bill will introduce a binding online privacy code aimed at social media, data brokerage and other large digital platforms, in addition to increasing penalties for breaches and strengthening enforcement measures.58 The Information Commissioner and Privacy Commissioner, Ms Angelene Falk, stated that this would ‘require them to be more transparent about how they handle personal information with more stringent requirements and privacy rules for children’.59
4.51
The Australian Government is also conducting a Privacy Act Review (the Review), with a particular focus on privacy concerns in relation to social media and other digital platforms. This Review was opened for public consultation, and submissions on its Discussion Paper closed on 10 January 2022.60 The Review will consider issues such as:
The scope and application of the Act, including in relation to what constitutes ‘personal information’, existing exemptions, and situations where the collection, use and disclosure of personal information is permitted;
The current Act’s protections in relation to personal information and whether it provides an appropriate framework for promoting strong privacy practices;
Potential powers for individuals to have direct rights of action to enforce privacy obligations;
The potential introduction of a statutory tort for serious invasions of privacy; and
Whether the notifiable data breach scheme is effective and its impact.61
4.52
Both the Online Privacy Bill and the Review were said to be strongly focused on empowering users with oversight of what happens to their data online, particularly in relation to children’s data. AGD stated that:
With the reforms we are engaging in—the Online Privacy Bill and the broader review of the Privacy Act—a large theme through those pieces of work is providing greater transparency to individuals so that they know how their personal information is being used, and, specifically in relation to the Online Privacy Bill, what protections should be in place particularly to protect the privacy of children. There are specific protections for children in the Online Privacy Bill, including things like parental consent to the use of children's personal information by social media platforms, as well as some additional protections. In addition to social media platforms needing to consider how notice should be given to children specifically—and that might be different to how notice is given to adults—an onus needs to be put on social media platforms to consider whether the use of that child's personal information is fair and reasonable.62
4.53
Social media companies raised concerns that the intended reforms, including the implementation of an online privacy code before the conclusion of the Review, could result in inconsistencies between the two processes. In response, Ms Falk stated that the legislative response was intended to address pressing concerns while also indicating where wider reform would be required.63

Online safety policy and programs

4.54
A large part of Australia’s governance of social media and online safety is overseen by eSafety, with a broad legislative framework and input from other agencies supporting its work. eSafety is provided administrative support by the Australian Communications and Media Authority (ACMA).
4.55
Policy matters are primarily the responsibility of Infrastructure. Other agencies, such as Home Affairs and the AGD, have responsibilities for particular aspects of maintaining online safety.

eSafety programs

4.56
eSafety has responsibility for a number of key policy tools that address online safety, such as:
World-first schemes to assist victims in reporting online bullying and harassment, including two separate schemes for children and adults;
A reporting tool for Australians who identify illegal and harmful content online, such as child sexual abuse, depiction or promotion of gratuitous crime and violence, and terrorism-related content;
An online portal and reporting tool addressing image-based abuse;
A research and education program to develop knowledge about online safety matters and provide safety awareness training to the broader community; and
Facilitating programs that identify and provide assistance to people most at risk of online harm, including those with low digital literacy skills, Aboriginal and Torres Strait Islander peoples, and people who are not able to understand English.64

Online Safety Charter

4.57
The Australian Government established an Online Safety Charter (the Charter), released in December 2019, as part of the Keeping Our Children Safe Online Package. The Charter ‘articulates a set of community-led expectations for industry to protect citizens, especially children and vulnerable members of the community, from harmful online experiences’.65
4.58
The Charter sets out several expectations of internet service providers and digital products, including that safety principles, protections and processes be incorporated in their design and operation.66 The Charter is designed to apply to multiple types of digital products, including social media and networking services, content hosts, gaming providers and app developers.67

Additional measures across multiple portfolios

4.59
Responsibility for managing online safety is distributed across a range of Australian Government agencies. Departments, agencies and statutory bodies with a substantial online safety role include:
Infrastructure;
Home Affairs, including the Australian Federal Police (AFP);
AGD;
the Department of Foreign Affairs and Trade;
ACMA (which includes eSafety);
the Australian Electoral Commission (AEC);
the Australian Centre to Counter Child Exploitation (ACCCE); and
the Office of the Australian Information Commissioner (OAIC).
4.60
Further, some departments oversee policy or service areas that contain matters in relation to online safety, including the Department of Health, the Department of Social Services, the Department of Education, Skills and Employment, and others.
4.61
Given the fragmented oversight of different aspects of online safety, coordinating activity is necessary to achieve positive policy outcomes. Infrastructure listed a number of interdepartmental committees and working groups of which it is a part. These include:
the Agency Heads Committee on Online Safety, an online safety focused group comprised of heads of Commonwealth departments and agencies;
the eSafety Advisory Committee, which is chaired by the eSafety Commissioner and includes representatives from industry, government, civil society and academia;
the ACCCE Prevention Awareness Working Group, which is led by the AFP;
the Preventing Terrorist and Violent Extremist Exploitation of the Internet committee, which is led by Home Affairs; and
the Electoral Integrity Assurance Taskforce, led by the AEC.68
4.62
Other measures adopted by the Australian Government include:
An online safety measures package to boost women’s safety online, including funding pilot technology to detect image-based abuse content posted online;
Additional resourcing for frontline workers dealing with victims of technology-facilitated abuse targeting women and children; and
The Be Connected program, which helps older Australians to improve their technological literacy, build confidence in using technology and stay safe online.

International jurisdictions and online safety

4.63
Jurisdictions around the world have been forced to examine the issue of online safety in their legislative systems. Reset Australia provided a breakdown of the comparative legislative frameworks across the European Union, Canada, Germany, the United Kingdom (UK), Ireland and Australia, which is reproduced at Figure 4.2.

Figure 4.2:  Comparative approaches to addressing types of harms through regulation

Source: Reset Australia, Submission 12, pp 12-13.

The United Kingdom

4.64
The UK shares many of the same experiences and concerns with respect to online safety as Australia, and has been considering how its regulatory environment should be updated to ensure that social media and internet companies begin to meet the community’s safety expectations.
4.65
The UK Government published an Online Harms White Paper in April 2019, highlighting a similar range of problems to those dealt with in this inquiry. The key recommendation of the White Paper was the introduction of a duty of care for internet companies, including social media and content-sharing platforms.69
4.66
The draft Online Safety Bill was published in May 2021, and a Joint Committee on the Draft Online Safety Bill (the JCDOSB) was established in June of that year to scrutinise the draft Bill.
4.67
The JCDOSB heard that there are three key features to the UK’s proposed approach to online safety:
1
It is systemic, in that platform operators are expected to have processes in place to identify, assess and mitigate hazards caused by their products;
2
It is flexible, in that it applies to operators of all kinds and sizes; and
3
It is future-proof, in that it is a framework whose technical details can be updated over time.70
4.68
Baroness Beeban Kidron, a member of the JCDOSB, said that the end goal of the United Kingdom’s reforms was to create an online environment:
in which the companies are actually being put under the same basic product safety approach that the rest of business is put under … [A]ny other company has to provide a product that is fit for human consumption. We do not allow cars to be put on the road with no brakes. This is a sector which is a car with no brakes.71
4.69
In addition to the draft Online Safety Bill, the UK has passed legislation to create an Age-Appropriate Design Code. This is a statutory code of practice covering services 'normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services' which are likely to be accessed by children.
4.70
The Code includes 15 standards intended to inform the design of digital services. The standards require platforms to employ child-centred design principles and measures such as data protection, age appropriateness, transparency, child-friendly defaults and parental controls, among others.72

The European Union

4.71
The European Union (EU) and its shared institutions, including the European Council, European Parliament and the European Commission, have established a range of regulations in relation to online safety. As stated by the Centre for Digital Wellbeing (CDW), the EU is:
striving to be [a] global role model for the digital economy and internationally promote its digital standards. As such a large and influential market, their regulation of social media contributes to the setting of global norms.73
4.72
The EU’s regulatory model is underpinned by the proposed Digital Markets Act and Digital Services Act, which will require approval from the European Council and the European Parliament before they are implemented and become binding on member states. The Digital Markets Act is designed to address matters such as privacy concerns, data sharing, and advertising based on personal data.74 The Digital Services Act is focused on online safety matters and the protection of the rights and privacy of users. The CDW provided an outline of its core proposals:
This proposal tackles core operations of platforms, namely how information is prioritised and presented on its online interface. Significant online platforms (with more than 45 million end-users, or an equivalent of 10% of the European Union population) would be required to ensure recipients are appropriately informed of the information presented to them. The Act defines the responsibilities of digital services providers, specifically online platforms, social media, and online marketplaces. Further, it outlines obligations and procedures to tackle illegal content and disinformation, and offers the opportunity to challenge content moderation decisions. The proposal introduces safeguards protecting fundamental rights, allowing citizens to freely express themselves while maintaining rights to effective remedies, non-discrimination, the rights of the child, and personal data and privacy protection.75
4.73
On 20 January 2022, Members of the European Parliament agreed to the draft measures contained in the Digital Services Act, which is now with the European Council for approval.76 The European Council agreed to the proposed measures in the Digital Markets Act on 25 November 2021.77
4.74
EU Member States have taken additional measures individually, including:
Germany, which introduced the Network Enforcement Act in 2017 to address hate speech and misinformation online;
Austria, which established the Communications Platforms Act in 2021 which is aimed at hate speech, harassment and false information on online platforms;
Sweden, with the development of a handbook for communicators in public administration that addresses misinformation campaigns;
Spain, which has implemented a digital transformation policy over five years in addition to adopting a digital charter of rights;
Denmark, which has adopted a range of measures such as legislation inspired by the Australian News Media and Digital Platforms Mandatory Bargaining Code in relation to advertising, in addition to the adoption of the EU Code of Practice on Disinformation and increased digital literacy education for adults and children; and
France, with the adoption of laws in relation to disinformation and misinformation, particularly during election periods.78

New Zealand

4.75
The New Zealand model of regulatory response to online safety is similar to that adopted by the Australian Government.
4.76
The primary legislative instrument regulating online safety in New Zealand is the Harmful Digital Communications Act 2015 (HDC Act). According to the New Zealand Ministry of Justice (Tāhū o te Ture), the HDC Act is designed to ‘prevent and reduce the impact of cyberbullying and other modern forms of harassment and intimidation’, including by criminalising the sending of messages and/or posting of material online designed to deliberately cause serious emotional distress to a victim.79 As at 9 March 2020, the HDC Act had resulted in:
148 criminal charges filed relating to 115 people
100 criminal charges finalised relating to 79 people
66 convicted and sentenced
25 withdrawn
9 other outcomes, including diversions completed and dismissals.80
4.77
The New Zealand Parliament (Pāremata Aotearoa) recently amended the HDC Act via the Harmful Digital Communications (Unauthorised Posting of Intimate Visual Recording) Amendment Bill (the HDC Bill), which focuses primarily on image-based abuse and the non-consensual sharing of intimate images. The HDC Bill was passed by the Parliament in March 2022 and is now in force.81
4.78
The HDC Act also established NetSafe as an ‘approved agency’ to assess, investigate and manage complaints in relation to online safety, in addition to introducing a civil court process for serious or repeated offences. As at 9 March 2020, the courts had received 14 civil cases requesting Harmful Digital Communications Orders, six of which were completed.82
4.79
NetSafe, a body similar to Australia’s Office of the eSafety Commissioner, is an independent not-for-profit organisation that works with the New Zealand Government in relation to online safety, including education and research. NetSafe also operates a reporting service through which people can report matters such as fraud, privacy issues, and online bullying or harassment.83 NetSafe and the New Zealand Ministry of Justice also provide information for parents and organisations (such as schools) on online safety guidelines and strategies. The New Zealand Department of Internal Affairs (Te Tari Taiwhenua) also has responsibility for matters relating to online child exploitation, governed by the Films, Videos and Publications Classification Act 1993.84
4.80
NetSafe is currently leading the development of the Aotearoa New Zealand Code of Practice for Online Safety and Harms, a voluntary industry code intended to establish a self-regulatory framework for the digital industry based on agreed principles and commitments. Companies such as Meta, Google, Microsoft, TikTok, Twitch and Twitter have been involved in the drafting of the new code, which as at 22 February 2022 was open for public comment.85

United States of America

4.81
The United States of America (the US, the United States) has a number of legislative protections, particularly in relation to privacy. The following federal legislation governs aspects of online safety:
The Electronic Communications Privacy Act, enabling the US Government to access digital communications and tracking technology with a subpoena;
The Computer Fraud and Abuse Act, which criminalises unauthorised access to, and sharing of, protected information;
The Communications Decency Act, which provides extensive immunity from liability for providers and users of online platforms; and
The Children’s Online Privacy Protection Act, in effect since 1998, and its associated Rule, which require websites that collect information about children under the age of 13 to comply with requirements administered by the Federal Trade Commission.86
4.82
Importantly, as the Australian Law Reform Commission has noted, there are no privacy protections in federal law aimed at adults in the United States.87 Restrictions on harmful content in United States legislation are limited by the constraints that the First Amendment of the US Constitution places on such attempts. Multiple Supreme Court cases have struck down regulation in favour of less restrictive methods of moderation (such as utilising filtering technology).88
4.83
Further regulation nonetheless continues to be pursued. There have been a number of congressional hearings in recent years, including the appearance of Ms Frances Haugen before the Senate Committee on Commerce, Science and Transportation.89 President Joe Biden has flagged online safety for children as a key topic of concern for his administration, and used the 2022 State of the Union address to request action on these matters.90

Further change to consolidate legislative powers

4.84
As detailed above, legislative and policy responsibility for matters relating to online safety is divided amongst multiple portfolios with varying functions, resulting in powers being dispersed broadly. Some submitters argued that the OSA should be reviewed in future to ascertain whether it had made an impact in relation to online safety and to examine whether a single regulatory framework would reduce complexity and confusion for providers.
4.85
DIGI commented on the disparate nature of legislation pertaining to online safety, suggesting that a single regulatory framework would be less confusing for providers. An example was provided in relation to abhorrent violent content, which DIGI stated is covered by multiple schemes in the OSA but is also captured by the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019. This, DIGI argued, has led to the emergence of inconsistencies across the legislative instruments:
As one example of an inconsistency, the OSA’s takedown schemes and the BOSE suggest that service providers should be required to remove all types of Class 1 material. However, the Commissioner’s position as stated in their position paper on the OSA Codes is that an identified subclass of Class 1, termed “Class 1b (fetish practices)” can be treated as Class 2 materials, and therefore do not need to be removed. It is unclear whether this interpretation extends to other aspects of the OSA, which creates confusion for industry participants working in good faith to comply with the legislation.91
4.86
DIGI also noted that other jurisdictions, such as the European Union and the UK, were seeking to implement an overarching legislative framework. It argued that a similar framework in Australia would:
aid clarity and compliance – particularly for start-ups, smaller challenger companies, and those without a large local staff presence – who may be struggling to make sense of the complex regulatory environment in Australia.92
4.87
While DIGI acknowledged that the OSA improves this situation, it recommended that the Australian Government adopt a single consolidated legislative framework to ensure clarity and certainty for providers.93
4.88
Many industry members and bodies pointed out that the Australian Government had passed a large number of legislative amendments and conducted a similarly large number of reform consultation processes. For instance, Meta stated that the Australian Government had already heavily regulated the online space in recent years, and it was unclear to what extent these reforms were having an effect:
In the last three years, at least 14 new federal regulations have come into force which primarily impact digital platforms. There have also been at least 18 major Government or parliamentary inquiries or consultations impacting digital platforms over the last three years. These developments are on top of existing regulations that cover digital platforms, including online safety, privacy and multinational taxation laws. The Government has recently foreshadowed additional regulations, including digital platforms-specific competition laws; age or identity verification; and new obligations about working with law enforcement.94
4.89
Meta argued that attention should therefore be focused primarily on whether the recent changes were working effectively, rather than on whether further changes were required.95 It also noted the risk of ‘overlapping, duplicative or inconsistent rules across different laws’, arguing that this hinders platforms from effectively understanding the legislation and implementing it in their systems.96
4.90
The global and interconnected nature of the digital world was also highlighted by Meta, which suggested that an Australian regulatory path should be cognisant of the ‘global contest of competing visions of the internet’.97 Meta argued that other nations watch Australia’s regulatory approach with their own legislative approaches in mind, and that regulators should examine ‘whether Australian regulation sets an example which encourages a liberal, open and democratic approach to the internet, or an internet that is more closed, tightly controlled and fragmented’.98
4.91
Snap Inc. (Snap) was also supportive of a single regulatory framework modelled on the proposed Digital Services Act in the EU. It stated:
Online regulation is most effective when it is based on broad principles that companies of all sizes are able to follow and implement proportionately, as relevant to their service and risk profile. Such regulation focuses on the principles or outcomes companies should deliver, setting out “what” objectives are to be achieved, without being too prescriptive as to “how” companies should achieve them. There is incredible variety in the size, resources and service models of different online platforms. A principles-based approach accommodates this variety and allows for innovative, effective approaches to be developed, while focusing on what is most important: the safety of users.99
4.92
Snap raised concerns that regulation of social media platforms could result in highly detailed and complex requirements that only the biggest social media companies would be able to comply with, due to their capacity to maintain large compliance teams.100 It argued that this point had been raised by the Australian Competition and Consumer Commission, which stated in its 2019 Digital Platforms Inquiry that large, dominant social media firms stifled competition in the market and had negative impacts on consumers.101 Snap stated that further ‘overly prescriptive’ regulation on smaller entities in the market would only amplify these power dynamics and the strength of the major social media companies.102
4.93
This point was also noted by the larger social media companies. Twitter argued that further regulatory intervention would ‘undermine competition and entrench incumbent services, reducing consumer choice’:
Policymakers should avoid mandating technical means of implementation that have the effect of further entrenching services based on those tools and technologies, or of benefiting those that have the financial and technical means to deploy the particular implementation proposed, not to mention the vendors promising a simple solution. Opportunities to expand interoperability and the adoption of open standards will empower people with greater choice and flexibility about how they interact with online services and drive competition.103
4.94
Google further noted that the disparate nature of Australia’s regulatory framework created inconsistencies in implementation, which was challenging for the industry to manage:
For example, age verification is proposed within five of these different workstreams with varying timetables for implementation. The lack of a current, clear, evidence-based pathway for delivery makes it more challenging for the industry. A coordinated approach to these issues that is unified under a whole of Government approach would help to ensure consistency and efficiency as we work together to further improve the online well-being of all Australians.104

Voluntary v. mandatory requirements

4.95
A common concern raised by many witnesses was that a number of Australia’s regulatory approaches were based on voluntary Codes of Practice or Codes of Conduct. Dr Michael Salter expressed the view that any form of self-regulation by the digital industry would not improve online harms:
Whenever these scandals are exposed in the press, we hear the same reassuring platitudes from social media executives, and, when pushed, they gradually adjust their operational models to placate public and political outrage, but their responses are consistently reactive and lacking in transparency and accountability … The lessons of the last 20 years suggest that self-regulation, co-regulation and voluntary industry codes of practice are insufficient to keep children safe.105
4.96
This point was echoed by Dr Hany Farid, who stated his view that the technology industry is ‘simply incapable of self-regulation’.106
4.97
Witnesses pointed to examples of industry self- or co-regulation which indicate that this model has been problematic. For example, Reset Australia explained that the voluntary code of conduct in relation to misinformation and disinformation had very few requirements on social media platforms:
If you look at how that's currently operating, there's a voluntary code that's opt in; the transparency reporting doesn't have clear KPIs or metrics; the committee charged with that is meeting on a six-monthly basis; and there are no clear compliance mechanisms or penalties in place.107
4.98
Ms Frances Haugen expressed disbelief at the prospect that social media and digital platforms would comply with a voluntary regulatory framework. She stated:
[I]n every part of the world, what has become clear over the last few years is that self-regulation does not work. Platforms cannot be trusted to act in the public interest. They are often, as my revelations showed, fully aware of the harms caused by their products and services, and yet choose to ignore these in favour of growth and profit.108
4.99
Further, Ms Haugen stated that social media companies were unlikely to comply with any voluntary requirement that could potentially hurt their business model, primarily because they are conscious of the power they hold. She explained that companies such as Meta are aware that the only people who genuinely understand their systems are their own employees, and that such companies can therefore be selective about explaining how their systems work to the broader community. This, she argued, leads to companies intentionally misleading the public in order to provide the appearance of compliance.109
4.100
The ACMA explained that voluntary regulation was designed to ‘encourage and incentivise those digital platforms to take actions themselves’, and that many online companies had taken steps to address commitments set out in voluntary codes, such as the code in relation to disinformation and misinformation.110 The voluntary code on disinformation and misinformation was framed as a ‘first instance’ response by government, with further action potentially required.111

Committee comment

4.101
Australia has made significant additions to its online safety regulatory regime in recent years, particularly by the introduction of its world-first adult online abuse and cyberbullying reporting schemes, as well as through the creation of the BOSE.
4.102
In addition, security and law enforcement agencies’ powers to identify, locate and disrupt the activities of online abusers – particularly the sexual exploitation and abuse of children online – have been greatly enhanced. The addition of powers to covertly access and control all kinds of electronic devices and online accounts, and access information held by overseas-based technology and social media companies, represents a substantial improvement to law enforcement’s ability to protect Australians from online harms.
4.103
While the commencement of the OSA early in 2022 has expanded the role played by the eSafety Commissioner and cemented the Commissioner’s role as a key regulator in the online safety space, there is room to build on that success and expand the scope of the OSA to simplify regulation.
4.104
Given the broad suite of issues that fall under the rubric of online safety, further centralisation of responsibility for online safety policy or enforcement may be challenging, unsuitable and impractical. Nonetheless, the Committee is mindful of the evidence adduced by industry groups and members who pointed to examples of inconsistency and uncertainty.
4.105
The OSA is required to have an independent review in January 2025. While this will enable the effects of the OSA to be seen more fully and in context with the other legislative reforms and policy measures being put in place, there remain areas of the law where attention is urgently required to prevent otherwise avoidable harm.
4.106
Delaying a subsequent review of the framework may hinder further action being taken in relation to matters which currently are not fully covered by legislation, such as technology-facilitated abuse in family and domestic violence contexts, volumetric attacks, and the risks posed by encryption.
4.107
Further, a broad-scale review of digital safety more generally has never been undertaken, and such a review would pose a considerable challenge in terms of scope, conduct and investigation. However, the Committee believes that industry must fully understand its roles and obligations in providing safety for users in a world where digital technology is fast becoming the primary form of communication for individuals, groups, business and government.
4.108
A broad review capturing all elements of Australia’s digital legislative frameworks, and the needs of an online Australia, is necessary to fully comprehend the online regulatory landscape. This review should commence within 18 months of the OSA’s commencement, allowing time for the effects of the legislation to be seen while ensuring that urgent action is prioritised.
4.109
Additionally, the Committee agrees with the views put by many witnesses that the social media and digital industry has for too long been able to self-regulate, with very few results to demonstrate its success. A broad review should cover all self- and co-regulatory frameworks currently in place, in addition to voluntary codes, and assess the adequacy of compliance with these models by social media and digital services. In the event that they are determined to be insufficient, the Committee recommends that future regulatory measures be mandatory.

Recommendation 18

4.110
The Committee recommends that the Department of Infrastructure, Transport, Regional Development and Communications conduct a Digital Safety Review of the legislative framework and regulation relating to the digital industry. The Digital Safety Review should commence no later than 18 months after the commencement of the Online Safety Act 2021, and provide its findings to Parliament within twelve (12) months.

Recommendation 19

4.111
The Committee recommends that, subject to Recommendation 18, the Digital Safety Review examine the need for, and possible models for, a single regulatory framework under the Online Safety Act 2021, to simplify regulatory arrangements.

