Children’s online safety legislation and regulations – a backgrounder
Executive summary
- Australia led the world in online safety regulation, with the introduction of the Enhancing Online Safety Act 2015 and the establishment of the eSafety Commissioner in 2015.
- Australia’s online safety regulation is primarily governed by the Online Safety Act 2021 (the Act), which is currently under review. Key features of the Act include a set of Basic Online Safety Expectations for online service providers; complaints and objections systems; and an Online Content Scheme backed by industry codes, a complaints system and removal and remedial notices.
- The eSafety Commissioner is currently working with industry to develop industry codes for the treatment of ‘class 2’ material – including pornography. Alongside this, the government has funded a pilot of age assurance technology covering online pornography and, more broadly, social media platforms.
- Draft and enacted online safety legislation and industry codes from the United Kingdom, Singapore, the European Union, France, the United States of America, Canada and New Zealand provide useful points of comparison to Australia’s online safety system.
- A common feature among many international regulations is the use of age verification to provide modified accounts to children to protect them from harms, including age-inappropriate content, communication with strangers, addictive design features, recommender algorithms, and data collection.
Introduction
This backgrounder provides an overview of the Australian
government’s approach to online safety for children – including discussion of
the Online Safety Act 2021 and eSafety’s proposed roadmap for age
verification – and outlines a sample of children’s online safety measures in
other jurisdictions.
This paper does not cover treatment of child abuse and
exploitation material.
Children’s online safety – Australia
Australia’s eSafety
Commissioner (eSafety) is the ‘world’s first government agency dedicated to
keeping people safer online.’[1]
The work of eSafety is primarily governed by the Online Safety
Act 2021 (the Act). While the Act’s aim is to improve and promote the
online safety of all Australians, the safety of children is a key
focus.[2]
Since eSafety was established in 2015, numerous other countries have legislated
their own approaches to safeguarding children online.
History of the Act
The Online Safety
Act 2021 came into effect on 23 January 2022, replacing the Enhancing
Online Safety Act 2015. The Act was born out of a recommendation of the
2018 ‘Briggs Report’, the Report
of the statutory review of the Enhancing Online Safety Act 2015 and the review
of schedules 5 and 7 to the Broadcasting Services Act 1992 (Online Content
Scheme). The Briggs Report recommended that ‘existing out-of-date and
inconsistent legislation should be replaced by a new Online Safety Act and a
new single code of industry practice’.[3]
In 2019, the Coalition made an election commitment to
introduce a new Online Safety Act, in response to the Briggs Report.[4] Consultation and
drafting of the new legislation began on 11 December 2019, with the Online
Safety Bill 2021 introduced into Parliament on 24 February 2021.[5] The Bill passed
both houses on 23 June 2021.
In February 2024, the government announced a review of the
effectiveness of the Act.[6]
The review is required under section 239A of the Act. The Terms of Reference outline a number of items to be considered, including:
5. Whether the regulatory
arrangements, tools and powers available to the Commissioner should be amended
and/or simplified, including through consideration of:
a. the introduction of a duty of care
requirement towards users (similar to the United Kingdom’s Online Safety Act
2023 or the primary duty of care under Australia’s work health and safety
legislation) and how this may interact with existing elements of the Act
b. ensuring industry acts in the
best interests of the child.[7]
How the Act currently protects children
Basic Online Safety Expectations
A core part of the Act is the provision of ‘Basic Online
Safety Expectations’ (BOSE).[8]
A factsheet published by eSafety summarises that:
These expectations are designed to
help make sure online services are safer for all Australians to use. They also
encourage the tech industry to be more transparent about their safety features,
policies and practices.
The Basic Online Safety Expectations
are a broad set of requirements that apply to an array of services and all
online safety issues. They establish a new benchmark for online service
providers to be proactive in how they protect people from abusive conduct and
harmful content online.
eSafety now expects online service
providers to take reasonable steps to be safe for their users. We expect them
to minimise bullying, abuse and other harmful activity and content. We expect
them to have clear and easy-to-follow ways for people to lodge complaints about
unacceptable use.[9]
Part 4 of the Act provides that the Minister may, by legislative instrument, determine basic online safety expectations for online service providers. Core expectations include, but are not limited to, that the provider of a service will take reasonable steps to prevent children from accessing inappropriate content, ensure end-users are able to use the service in a safe manner, and ensure that the service has a robust reporting and complaints system.[10]
The current expectations are outlined in the Online Safety
(Basic Online Safety Expectations) Determination 2022, updated on 31 May
2024, following a review.[11]
New expectations include that ‘the best interests of the child are a primary
consideration in the design and operation of any service that is likely to be
accessed by children’.[12]
Critically, the Act states that these expectations are not enforceable in court.[13]
However, the Act does provide that eSafety can require service providers to
report on their compliance with the expectations in various ways. Failure to
report is an offence.[14]
According to the eSafety website, this reporting requirement is ‘designed to
improve providers’ safety standards and improve transparency and accountability’.[15]
Complaints and objection systems
The Act provides for the creation of a complaints system for
cyber‑bullying
material targeted at an Australian child, as well as complaints and objection systems
for non-consensual sharing of intimate images, for cyber-abuse material
targeted at an adult, and in relation to the Online Content Scheme (discussed below).[16] The system for
cyber-bullying material targeted at an Australian child allows children, and
responsible persons on behalf of a child, to lodge a complaint through an
online form on the eSafety website regarding ‘online communication to or about
an Australian child that is seriously threatening, seriously intimidating,
seriously harassing or seriously humiliating’.[17]
As summarised in the Simplified Outline of the Act:
The complaints system for cyber‑bullying
material targeted at an Australian child includes the following components:
(a) the provider
of a social media service, a relevant electronic service or a
designated internet service may be given a notice (a removal notice)
requiring the removal from the service of cyber‑bullying material
targeted at an Australian child;
(b) a hosting service
provider who hosts cyber‑bullying material targeted at an Australian
child may be given a notice (a removal notice) requiring the
provider to cease hosting the material;
(c) a person who posts
cyber‑bullying material targeted at an Australian child may be given a
notice (an end‑user notice) requiring the person
to remove the material, refrain from posting cyber‑bullying material or
apologise for posting the material.[18]
Section 12 of the Act defines ‘removed’ as when ‘the
material is neither accessible to, nor delivered to, any of the end-users in
Australia’ using the regulated service.[19]
Failure to comply with the above notices may lead to a number of possible enforcement actions, including a court injunction and a civil penalty notice.[20]
Online Content Scheme
The Act includes an Online Content Scheme that regulates illegal
and restricted online content, which eSafety describes as:
online
content that ranges from the most seriously harmful material, such as images
and videos showing the sexual abuse of children or acts of terrorism, through
to content which should not be accessed by children, such as simulated sexual
activity, detailed nudity or high impact violence.[21]
The Act provides that eSafety ‘can direct an online service
or platform to remove illegal content or ensure that restricted content can
only be accessed by people who are 18 or older’.[22] Illegal and
restricted online content is classified as either class 1 or class 2 material,
with class 1 material being ‘material that is
or would likely be refused classification under the
National Classification Scheme’ and class 2 material being ‘material that is,
or would likely be, classified as either’:
- X18+ (or, in the case of publications, category 2 restricted),
or
- R18+ (or, in the case of publications, category 1
restricted) under the National Classification Scheme, because it is
considered inappropriate for general public access and/or for children and
young people under 18 years old.[23]
eSafety has proposed simplified subcategories for class 1
and class 2 materials, which it argues are based on, and consistent with, the
National Classification Code and film classification guidelines.[24] These
subcategories have been devised for the purpose of industry codes, recognising
that within class 1 and class 2 material, ‘some content is more harmful than
other content, and industry participants may handle this material in different
ways’.[25]
Industry codes and standards
Division 7 of Part 9 of the Act provides that industry should
develop industry codes and standards that include procedures for dealing with
class 1 and class 2 content. Codes must be registered with eSafety, and once
registered industry is required to comply.[26]
eSafety may refuse to register codes if they do not meet the statutory
requirements, in which case eSafety can develop an industry standard for that
section of the online industry, which must be complied with.[27] Repeated
non-compliance may lead to a Federal Court order for a service provider to stop
providing that service in Australia.[28]
To date, 6 industry codes addressing class 1A and class 1B
material have been registered and have come into effect, with a further 2 set
to take effect on 22 December 2024.[29]
Development of codes to address class 2 material – which is considered
inappropriate for children – formally commenced on 1 July 2024, with final
draft codes expected on 19 December 2024.[30]
Complaints system and removal notices
The Online Content Scheme is strengthened by a complaints
system.[31]
Subsection 38(1) provides that a person may make a complaint to eSafety if they
believe class 1 material or class 2A material that shows actual sexual
intercourse or sexual activity between consenting adults, i.e. pornography, is
accessible online via a regulated service.[32]
Subsection 38(2) provides that a person may also make a complaint to eSafety if
class 2 material that has been or is likely to be classified R18+ or Category 1
Restricted under the National Classification Code (class 2B) is accessible and
not subject to a restricted access system (discussed below).[33] In response to a
complaint, eSafety may:
- issue a removal notice requiring a service provider or hosting service provider to remove class 1 material or class 2A material that shows actual sexual activity between consenting adults, or
- issue a remedial notice requiring a service provider or hosting service provider to either:
  - remove content that is likely to be classified R18+ or Category 1 Restricted (class 2B material), or
  - ensure that access to the material is subject to a restricted access system.[34]
The provisions related to class 1 material apply to material that can be accessed by end-users in Australia, regardless of where it is provided from or hosted.[35] However, the provisions related to class 2 material apply only to services that are provided from or hosted in Australia.[36]
As outlined in the Explanatory Memorandum for the Act:
This is intended to capture, for
example, social media services who have a registered office or carry on
business in Australia or a website based in Australia. It is not intended to
capture services based overseas that provide X 18+ material. This type of
content is instead intended to be dealt with through industry codes and
standards.[37]
Civil penalties apply for non-compliance with removal
notices and remedial notices. Continued non-compliance with a removal notice
may lead to a Federal Court order for a service provider to stop providing that
service in Australia.[38]
In summary, these provisions mean that pornographic material hosted in or provided from Australia may be subject to a removal notice, and that material classified as inappropriate for children that is hosted in or provided from Australia must be made subject to a restricted access system.
Restricted access systems
The requirements for restricted access systems are specified
in the Online
Safety (Restricted Access Systems) Declaration 2022, as provided for in
section 108 of the Act.[39]
The Declaration includes requirements that the access-control system:
- requires
an application for access to relevant class 2 material
- gives
warnings and safety information for relevant class 2 material
- incorporates
reasonable steps to confirm the age of applicants and
- limits
access to relevant class 2 material.[40]
Importantly, this includes a requirement that ‘the access‑control
system must incorporate reasonable steps to confirm that an applicant is at
least 18 years of age’.[41]
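The Declaration frames these requirements functionally rather than prescribing a particular technology. As a purely illustrative sketch (not drawn from the Declaration, the Act or eSafety guidance, and using invented names such as AccessApplication and handle_access_request), the following Python outline shows one way a provider might structure the four elements listed above: an application for access, warnings and safety information, reasonable steps to confirm the applicant is at least 18, and limited access to the material.

```python
from dataclasses import dataclass

SAFETY_INFORMATION = (
    "This material is restricted to people aged 18 and over. "
    "Support resources are available if you find this content distressing."
)

@dataclass
class AccessApplication:
    """Hypothetical application for access to relevant class 2 material."""
    applicant_id: str
    declared_age: int
    age_evidence_verified: bool  # e.g. outcome of a separate age assurance check

def handle_access_request(application: AccessApplication) -> dict:
    """Illustrative gate reflecting the Declaration's four listed requirements."""
    # 1. Require an application before any restricted material is served.
    if application is None:
        return {"access": False, "reason": "no application lodged"}

    # 2. Give warnings and safety information as part of the response.
    response = {"warning": SAFETY_INFORMATION}

    # 3. Take reasonable steps to confirm the applicant is at least 18.
    if application.declared_age < 18 or not application.age_evidence_verified:
        response.update({"access": False, "reason": "age not confirmed"})
        return response

    # 4. Limit access to the restricted material to this confirmed applicant.
    response.update({"access": True, "granted_to": application.applicant_id})
    return response

if __name__ == "__main__":
    sample = AccessApplication(
        applicant_id="user-123", declared_age=21, age_evidence_verified=True
    )
    print(handle_access_request(sample))
```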
Children’s online exposure to pornography and an
age verification roadmap
Safeguarding children from accessing harmful and
inappropriate material online, specifically pornography, has been an area of
longstanding government and parliamentary focus.
In 2015–2016 the Senate Standing Committee on Environment
and Communications held an inquiry into the ‘Harm
being done to Australian children through access to pornography on the Internet’.
The Government supported the committee’s recommendations to commission further
dedicated research into young people’s exposure to pornography online.[42] In late 2017, the
Australian Institute of Family Studies (AIFS) released the ‘Effects
of Pornography on Children and Young People’
report, which synthesised the findings of research into the impacts of exposure
to pornography on young people. It highlighted that research shows young people's consumption of pornography online influences expectations of sex; shapes sexual practices, including sexual health safety; reinforces gender stereotypes; and is positively correlated with an increase in sexual aggression.[43]
In 2018, eSafety published a report on ‘Parenting
and pornography: findings from Australia, New Zealand and the United Kingdom’.
The findings from Australia included that only 24% of surveyed parents thought their children had been exposed to online pornography. Of those parents, 40% believed their children had come across the content accidentally, while a further 8% responded that their children had been sent material by a stranger.[44]
Following this, in 2019, the House of Representatives
Standing Committee on Social Policy and Legal Affairs launched an ‘Inquiry
into age verification for online wagering and online pornography’. The
inquiry made 6 recommendations, including that ‘the Australian Government
direct and adequately resource the eSafety Commissioner to expeditiously
develop and publish a roadmap for the implementation of a regime of mandatory
age verification for online pornographic material’.[45] This
recommendation was supported by government.[46]
In March 2023, eSafety completed this work and submitted
an age verification background report and age verification roadmap to the
government for consideration.[47]
The roadmap’s key recommendation is to ‘develop, implement, and evaluate a
pilot before seeking to prescribe and mandate age assurance technologies for
access to online pornography.’[48]
As outlined in the roadmap, eSafety’s research found that 75% of
16–18-year-olds surveyed had seen online pornography.[49] In light of these
findings, and the harms of children’s exposure to pornography, as referenced
above, eSafety suggested that:
If a service allows pornography, it
should apply settings to prevent it from being accessed by and recommended to
children. Among other things, this requires robust age assurance measures at
sign-up to ensure the service knows the age of its users. If a service does not
allow pornography, this rule needs to be enforced through effective reporting
mechanisms and proactive content detection and moderation tools, developed and
deployed in consultation with the user community.[50]
eSafety also outlined that:
While the roadmap focuses on
children’s access to online pornography, the ability of online service
providers to ascertain the age of their users is essential to keeping children
safe from a wider spectrum of risks and harms beyond pornography.[51]
The roadmap approaches the rollout of age-assurance technology cautiously, noting that any enforcement is contingent on appropriate technology.[52]
The independent assessment of available age assurance technologies, contained
within the report, found the market to be ‘immature but developing’ – and
that interest in age-assurance technologies by governments worldwide is
spurring the development of robust age-assurance processes by online companies.[53] In light of this,
the report recommended that ‘age assurance technologies should be trialled in
Australia, based on lessons from pilots conducted elsewhere, before being
mandated’.[54]
Further, it suggests that while the technology will likely reduce children’s
access to pornography, ‘age assurance on its own will not address this issue’.[55]
The Government issued its response
to the roadmap on 30 August 2023.[56]
The response noted the roadmap’s recommendation regarding a pilot of age
assurance technologies but decided to ‘await the outcomes of the class 2
industry codes process before deciding on a potential trial of age assurance
technologies’.[57]
This approach was reviewed, however, following a National Cabinet meeting on 1
May 2024 focussing on measures to end violence against women. The Government announced
that it would ‘provide resourcing to conduct a pilot of age assurance
technology to protect children from harmful content, like pornography and other
age-restricted online services’ and outlined that:
The new pilot… is part of a suite of
interventions aimed at curbing easy access to damaging material by children and
young people, and tackling extreme misogyny online.
The pilot will identify available age
assurance products to protect children from online harm, and test their
efficacy, including in relation to privacy and security.[58]
This measure was funded in the May 2024 Budget.[59]
A key difference between the potential inclusion of age verification mandates within class 2 industry codes and the current provisions related to accessing class 2 content is the scope of application. Under the Online Safety Act, eSafety may only
issue removal notices and remedial notices and mandate the use of restricted
access systems for class 2 content if it is provided from or hosted in
Australia.[60] However, if a class 2 industry
code were to mandate the use of an age assurance system, this would apply to all
‘online services so far as those are provided to end users in Australia’.[61]
Other measures and considerations
The government's response to the 2023 report of a two-year review of the Privacy Act 1988 agreed in principle to a 'suite of proposed additional protections' which would apply to all children.[62] This includes agreeing to the development of a Children's Online Privacy Code.[63]
Further, the potential harms posed by recommender algorithms in social media feeds have been a recent topic of interest.[64] In December 2022, eSafety published Position Statement: Recommender systems and algorithms, which outlines a range of risks posed by recommender systems and potential mitigating actions.[65]
Next steps
As indicated above, the next
regulatory step in the protection of children from online harms is the
development of industry codes for class 2 content, which began on 1 July 2024.[66]
The development and registration of these codes will create a responsibility
for online service providers to control access to class 2 material such as
online pornography that is considered harmful to children.
As noted, the Government has also funded a pilot of age assurance technology, which will underpin the enforcement of these codes and run in parallel with their development.[67] Statements by government have confirmed that the trial of age-assurance technologies will relate not only to accessing 'age inappropriate material' and age-restricted services, but also to social media services.[68] To complement the technology trial, the Department of Infrastructure, Transport, Regional Development, Communications and the Arts will undertake research into potential age limits to be imposed on social media generally, restricting access by child users.[69]
Any regulatory developments in this area sit outside the current development of
class 2 industry codes.[70]
The current review of the Online Safety Act also provides an opportunity to consider the effectiveness of the Act, consider any necessary reform, and evaluate Australia's approach against that of other jurisdictions.
Children’s online safety – International approaches
This section outlines some examples of online safety
legislation in international jurisdictions, with a specific focus on measures
related to child users’ safety online. Examples are provided for the Five Eyes
countries – as countries comparable to Australia – as well as other
jurisdictions that have taken notable, recent legislative action in the online
safety space. This overview is representative, not exhaustive. Given the
constitutional, political, legislative, and structural differences between countries,
it is not always possible to draw clear comparisons between Australian and
overseas legislation.
Five Eyes countries
United Kingdom
In the UK, a draft Online Safety Bill was first published in
May 2021, with various changes made to the Bill before its introduction to
Parliament in March 2022.[71]
The Online
Safety Act 2023 (the UK Act) received Royal Assent on 26 October 2023.[72]
As outlined in the UK Act’s introduction,
the ‘Act provides for a new regulatory framework which has the general purpose
of making the use of internet services regulated by this Act safer for
individuals in the United Kingdom.’ Towards this purpose, the UK Act imposes
duties on providers of regulated services to identify, mitigate and manage the
risks of harm from content and activity that is harmful to children. The duties
seek to ensure that services are ‘designed and operated in such a way that a
higher standard of protection is provided for children than for adults’.[73]
Duties of Care
Part 3 of the UK Act imposes various duties of care on
providers of regulated services. In summary, these duties focus on mitigating
and managing the risks arising from services hosting harmful or illegal content,
or facilitating harm or illegal activity. Services must have robust content
reporting and complaints systems and must have regard to users’ freedom of
expression and privacy.
Additional duties are imposed on services that are likely to
be accessed by children.[74]
These provide that relevant services must carry out, and keep up to date,
suitable and sufficient children’s risk assessments, and that they must take
proportionate measures to reduce the risk of children accessing harmful content.
There is also a requirement for user-to-user providers (i.e. social media
services) to use age verification or age estimation to prevent children from
encountering primary priority content that is harmful to children. Content deemed
to be harmful to children for the purpose of the Act includes pornographic
content as well as content that encourages, promotes, or provides instructions
for suicide, self-harm, or an eating disorder.[75]
Other duties
Part 5 imposes duties on providers of pornographic content. This includes a duty to ensure, by the use of age verification or age estimation (or both), that children are not normally able to encounter regulated provider pornographic content on the service.[76]
Compliance with duties
The UK Act provides that OFCOM (the UK’s communications
regulator) must prepare and issue a code of practice for service providers recommending
measures for the purpose of compliance with their various duties. Comprehensive
Draft
Children’s Safety Codes for search engines and user-to-user services were
published on 8 May 2024.[77]
OFCOM is also responsible for enforcement of the UK Act.[78]
Section 131 provides a list of duties that are enforceable
requirements, which appears to cover all duties outlined above in relation
to children’s online safety and access to pornographic content. OFCOM may give
a provisional notice of contravention (effectively a warning) to the provider
of a regulated service if there are reasonable grounds to believe that they
have failed to comply with an enforceable requirement.[79] A provisional
notice of contravention may lead to a confirmation decision – an order to
require a person to take steps to comply with a notified requirement or an
order to pay a penalty.[80]
OFCOM may proceed in issuing a confirmation decision (without a provisional
notice) if it is satisfied that a provider has failed to comply with a risk
assessment duty or duty regarding children’s access requirements.[81]
Failure to comply with requirements imposed by OFCOM in a
confirmation decision may lead to imprisonment or a fine, or both.[82] OFCOM may also
restrict the service of non-compliant providers.[83]
Canada
The proposed Bill C-63 (Online
Harms Act) was introduced by the Canadian Government on 26 February 2024 and is
currently before parliament.[84]
The bill aims to create ‘a baseline standard for online platforms to keep
Canadians safe’, with a special focus on children.[85] Alongside
standards for online platforms, the bill proposes a new ecosystem of regulatory
infrastructure through the establishment of the Digital Safety Commission of
Canada, the Digital Safety Ombudsperson of Canada, and the Digital Safety
Office of Canada.[86]
The Bill specifically targets 7 types of harmful content, including content used to bully a child and content that induces a child to harm themselves.[87] It also subjects social media services to 4 duties: to act responsibly, including by mitigating the risk of users being exposed to harmful content; to protect children; to make certain sexually exploitative or intimate content inaccessible; and to keep records.[88] The Duty to Protect Children includes a requirement that the design of regulated platforms respect the protection of children.[89]
New Zealand
The New Zealand Government recently ceased work aimed at modernising the country's online safety regulation, deciding not to progress proposals developed through a 3-year review.[90] The Safer Online
Services and Media Platforms (SOSMP) review ran from June 2021 to May 2024,
with the objective ‘to improve the regulation of online services and media
platforms to boost consumer safety for all New Zealanders, with a particular
focus on minimising content harms for children and young people’.[91]
The review aimed to streamline, and address gaps in, the current regulatory system, which comprises the Films, Videos, and Publications Classification Act 1993, the Broadcasting Act 1989, and 'voluntary self-regulation by operators', and does not comprehensively cover online content.[92]
A discussion paper published by the government in June 2023 proposed an industry regulation model overseen by a new regulator operating at arm's length from government, intended to 'cover all platforms, regardless of format or type', with a focus on larger platforms.[93] The paper called for submissions from stakeholders in response to specific policy proposals and indicated that a draft Bill was expected in 2024 at the earliest.[94]
A summary report released in April 2024 synthesised
submissions made to the review – including general support from social media
platforms and from organisations.[95]
However, 18,978 of the approximately 20,000 submissions received were template submissions from the Free Speech Union and Voices for Freedom, which were strongly negative and concerned that 'the proposals would result in the narrowing of people's right to freely express themselves'.[96] In response, the Minister for Internal Affairs, Brooke van Velden, announced on Facebook the government's decision not to progress the reforms, citing the strong opposition and the 'principle of free speech'.[97]
In lieu of legislation,
social media platforms have signed up to the voluntary Aotearoa
New Zealand code of practice for online safety and harms, which was developed by Netsafe – an independent online
safety charity – in consultation with stakeholders and launched in 2022.[98]
Current signatories include Meta, Google, TikTok, Twitch and X.[99]
United States of America
There has been significant activity on children's online safety in the United States, with various laws proposed and enacted at the state level, and relevant legislation introduced at the federal level. These laws build on the foundation of the federal Children's Online Privacy Protection Rule (COPPA), issued in 1999, which protects the privacy of children online and has effectively limited the use of most social media services by children under 13 without parental consent.[100]
A common feature of much of the legislation at state level is the mandatory use of age verification – for social media platforms and for sites hosting pornography.[101] Analysis from June 2024 notes
that Arkansas, Connecticut, Louisiana, Ohio, and Utah have passed laws ‘requiring
social media platforms to verify that users are over either age 16 or 18 and
require parental consent from users under that age limit’, while Arkansas,
Louisiana, Mississippi, Montana, North Carolina, Texas, Utah, and Virginia have
all passed laws ‘requiring online services with a certain amount of adult
content to verify that users are over 18 or risk fines’.[102]
Representative laws from Utah – one of the first
states to legislate in the area – are detailed below as an example of state
legislation.[103] Orrick’s ‘Online Safety Resource Center’
provides an overview of the rapidly developing state laws across the country.
Children’s Online Privacy Protection Rule
The Children's
Online Privacy Protection Rule (COPPA), issued in 1999 and effective since
21 April 2000, requires that operators of certain websites and online services
must, among other things, ‘obtain verifiable parental consent prior to any
collection, use, and/or disclosure of personal information from children’.[104]
A child is defined as an individual aged under 13, and personal information
includes a child’s first and last name and IP
address.[105]
In order to comply with the rule, many platforms – such as Facebook, Instagram and Discord – have set terms of service that restrict accounts to users aged 13 and over.[106] Others – such as YouTube and TikTok – provide modified accounts for users under 13, with parental consent.[107] However, COPPA's restrictions also apply in instances where account creation is not needed to access a platform's content. In 2019, the US Federal Trade Commission fined YouTube US$136 million for violating COPPA. The platform was found to have knowingly collected data through cookies from viewers of child-directed channels without first obtaining parental consent.[108]
Kids Online Safety Act
At the federal level, a
proposed Kids Online Safety Act has been introduced in both the Senate
and the House
of Representatives with bipartisan support.[109]
While the two bills differ in some respects, they share common core features, including requirements that 'covered platforms' – including online platforms, messaging applications, online video games, and video streaming services – be designed to prevent harm to minors; that minors be provided with modified accounts with protections regarding recommender systems, addictive design features and communications with strangers; that parental tools be available for minors' accounts; that platforms meet reporting requirements; and that a study into age verification technology be commissioned. Minors are defined as those under the age of 17. Reporting suggests that the Senate bill may have the required support to pass.[110]
Utah Minor Protection in Social Media Act
Utah
Senate Bill 194 (Social Media Regulations Amendments) enacts the Utah
Minor Protection in Social Media Act (the Utah Act), which will be
effective from 1 October 2024. Under the Utah Act, social media platforms must
implement an age assurance system to determine whether a current or prospective
Utah account holder is under the age of 18.[111]
Platforms must provide modified accounts for minors, with modifications
regarding privacy, data collection, ability to message strangers, and addictive
design features.[112]
The Utah Act also mandates that social media companies offer
supervisory tools for minors’ accounts. These tools must include capabilities
for an individual selected by the minor to set time limits on account usage,
and to view data and settings related to the minor’s account. Minors must not
be able to change their default privacy settings without first obtaining
verifiable parental consent.[113]
Utah
House Bill 464 (Social Media Amendments) addresses potential harms to
minors caused by social media platforms.
Section 1 (effective 1 May 2024) amends the offence of
electronic communication harassment to include the electronic publishing of
personal identifying information of a minor by a non-relative, if they are
aware that such an action will result in a substantial risk that the minor will
be the victim of an offense against the individual as outlined in the Utah
Criminal Code.[114]
Section 2 (effective 1 October 2024) acknowledges potential harms of excessive
social media use by minors — including by providing that ‘a Utah minor account
holder or a Utah minor account holder's parent may bring a cause of action
against a social media company in court for an adverse mental health outcome
arising, in whole or in part, from the minor's excessive use of the social
media company's algorithmically curated social media service’.[115]
Together, Utah Senate Bill 194 and Utah House Bill 464 repealed and replaced Utah Senate Bill 152 and Utah House Bill 311, which were passed in 2023 and together formed the Utah Social Media Regulation Act (2023 Utah Act).[116]
At the time, the Utah Social Media Regulation Act was regarded as the first of
its kind in the United States.[117]
However, the 2023 Utah Act was the subject of a complaint by
a trade association for internet companies, which argued that it was
unconstitutional.[118]
Notable provisions of the 2023 Utah Act have been removed from the 2024 Bills,
including those requiring that:
- social
media platforms may not permit a Utah minor to be an account holder unless the
minor has the express consent of a parent or guardian[119]
- minors’
accounts may not display advertising[120]
- platforms must prohibit minors from signing in between the hours of 10:30pm and 6:30am, unless a parent authorises access[121]
- platforms
must not use a design or feature that the company should know causes a minor to
have an addiction to the platform.[122]
Despite these changes, new complaints have been raised about the 2024 Bills.[123] The complaints again argue that the bills violate the constitution and that they block the flow of information and the exercise of free speech.
Other jurisdictions
European Union
Regulation
(EU) 2022/2065 of the European Parliament and of the Council of 19 October
2022 on a Single Market For Digital Services and amending
Directive 2000/31/EC (Digital Services Act) (the Digital Services Act, or DSA) entered into force on 16 November 2022. The Digital Services Act aims to ‘contribute
to the proper functioning of the internal market for intermediary services by
setting out harmonised rules for a safe, predictable and trusted online
environment that facilitates innovation and in which fundamental rights
enshrined in the Charter, including the principle of consumer protection, are
effectively protected’.[124]
The Digital Services Act's most extensive obligations apply to providers of 'very large online platforms' (VLOPs) and 'very large online search engines' (VLOSEs) designated by the European Commission. The first platforms designated by the European Commission were required to meet these requirements by August 2023, with all other regulated platforms covered from February 2024.[125] Designated platforms include YouTube, LinkedIn, Facebook, Instagram, Pinterest, Snapchat, TikTok, and X, as well as Wikipedia, and several pornography sites, search engines, retail and app stores, and travel booking sites.[126]
Online protection of minors
Article
28 of the Digital Services Act states that:
providers of online platforms
accessible to minors shall put in place appropriate and proportionate measures
to ensure a high level of privacy, safety, and security of minors, on their
service.
The article also restricts advertising based on the
profiling of users known to be minors.
Providers of VLOPs and VLOSEs are required to annually
identify and assess potential risks caused by their service, including those
that may negatively affect the protection of minors.[127] Platforms must
put in place measures to mitigate these risks, including ‘taking targeted
measures to protect the rights of the child, including age verification and
parental control tools, tools aimed at helping minors signal abuse or
obtain support, as appropriate’.[128]
Article 44(1)(j) provides that the Commission ‘shall support and promote the development and implementation of voluntary standards’ for targeted measures to protect minors online.
The Digital Services Act also provides that services must be
designed to be understood by children. This includes requiring that the terms
and conditions of services primarily directed at or predominantly used by
minors be written in a way that minors can understand, and that complaints
mechanisms be organised in a way that is child friendly.[129]
Compliance and penalties
Providers of VLOPs and VLOSEs have various responsibilities
under the Digital Services Act, including reporting compliance. The reporting
obligations include undertaking an annual independent audit, providing the
enforcement authority with access to data necessary to monitor compliance, the
establishment of an in-house ‘compliance function’, and transparency reporting
obligations.[130]
The EU and Member States are responsible for ensuring that
online providers comply with the Digital Services Act.[131] EU Member States
are required to set rules for penalties applicable to infringements under the
DSA. The maximum fine that may be imposed for failure to comply with an
obligation under the DSA is 6% of the annual global turnover of the provider of
intermediary services. The maximum periodic penalty payment is 5% of the
average daily global turnover of the provider of intermediary services per day.[132] The
European Commission can also impose fines where a provider of a VLOP or
VLOSE intentionally or negligently infringes the DSA, or fails to comply with a
direction or binding commitment made under the DSA.[133] Fines cannot
exceed 6% of the provider’s annual global turnover in the preceding financial
year.[134]
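As an arithmetic illustration only (using invented turnover figures, not data from any actual enforcement action), the short Python sketch below applies these 6% and 5% ceilings to a hypothetical provider.

```python
def max_dsa_fine(annual_global_turnover: float) -> float:
    """Ceiling on a fine: 6% of the provider's annual global turnover."""
    return 0.06 * annual_global_turnover

def max_dsa_periodic_penalty(average_daily_global_turnover: float) -> float:
    """Ceiling on a periodic penalty payment: 5% of average daily global turnover, per day."""
    return 0.05 * average_daily_global_turnover

if __name__ == "__main__":
    annual_turnover = 10_000_000_000        # hypothetical EUR 10 billion annual global turnover
    daily_turnover = annual_turnover / 365  # hypothetical average daily global turnover
    print(f"Maximum fine: EUR {max_dsa_fine(annual_turnover):,.0f}")                                  # 600,000,000
    print(f"Maximum periodic penalty per day: EUR {max_dsa_periodic_penalty(daily_turnover):,.0f}")   # ~1,369,863
```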
France
French Law
no. 2023-566 of July 7th 2023 implementing a digital age of majority and
preventing online hate (the French Law) was enacted on 7 July 2023. The
French Law amends Law No. 2004-575 of June 21, 2004, on Confidence in the
Digital Economy ‘and implements a digital age of majority and reinforces the
obligations of social networks, especially regarding registration of minors’.[135]
Digital age of majority
Article 4 of the French Law stipulates that social media services operating in France shall refuse the registration of people under the age of 15, unless authorised by a parent. This applies to new and existing accounts. If authorisation is granted and an account is registered for a minor, the platform must provide information to the minor and their parent on the risks associated with digital use and the means of prevention, as well as clear and appropriate information on the conditions of use of their data and their rights relating to data processing, files and freedoms. The platform must also activate a mechanism to monitor the time the user spends on the service and regularly notify the user of that duration. A parent may request that a social media service suspend the account of a minor.
A reference framework will be provided by the Regulatory
Authority for Audiovisual and Digital Communication outlining the ways in which
social media services may verify the age of users and parental authorisation.
The French Law also introduces obligations for social media
services to prevent online harassment by making anti-harassment messages
visible to users, directing users to support structures, and enabling all users
to report illicit content.[136]
Penalties
Article 4 of the French Law outlines penalties for noncompliance. Failure to implement a technical solution to verify the age of end-users and parental authorisation will result in a formal compliance notice being issued by the Regulatory Authority for Audiovisual and Digital Communication. The provider has 15 days to respond. In the event of non-compliance, the Authority can refer the matter to the Paris Judicial Court for a court order. Failure to comply with the court order is punishable by a maximum fine of 1% of the provider's worldwide turnover for the preceding financial year.
Singapore
The Online
Safety (Miscellaneous Amendments) Act (the Singapore Act) entered into
force on 1 February 2023.[137]
Code of Practice for Online Safety
Among other things, the Singapore Act empowers the Singaporean Info-communications Media Development Authority (IMDA) to designate social media services (SMSs) with significant reach or impact in Singapore, which must then comply with online codes of practice. IMDA has issued a Code of Practice for Online Safety, which came into effect on 18 July 2023. The designated SMSs required to comply with the Code are Facebook, HardwareZone, Instagram, TikTok, Twitter, and YouTube.[138]
Subsection Aii of the Code
of Practice for Online Safety outlines specific measures that designated
SMSs must follow for the protection of children.
The code acknowledges that specific content is harmful to children and provides that services must develop, publish and follow community guidelines and standards which, at a minimum, address sexual content, violent content, suicide and self-harm content, and cyberbullying content.[139] If children do search for high-risk content, the service must actively offer relevant safety information.[140] Further, children must not be targeted to receive content – including advertising and recommendations – that the service is ‘reasonably aware to be detrimental to their physical or mental well-being’.[141]
The code also provides that children or their guardians must
be provided with tools and information to help manage their safety online.[142] Significantly,
child users must also be provided with differentiated accounts whereby:
the settings for the tools to
minimise exposure and mitigate impact of harmful and/or inappropriate content
and unwanted interactions are robust and set to more restrictive levels that
are age appropriate by default.[143]
Penalties
Where an online communication service (which includes SMSs)
fails to satisfy its duty to take all reasonably practicable steps to comply
with the Code of Practice, IMDA may impose a financial penalty of any amount it
sees fit, up to S$1 million, or may direct the provider to take certain steps
within a specified time.[144]
Failure to comply with such a direction is an offence subject to a possible fine of up to S$1 million (and a further fine not exceeding S$100,000 for every day or part-day during which the offence continues after conviction).[145]
Conclusion
Australia led the world in online safety regulation, with the introduction of the Enhancing Online Safety Act 2015 and the establishment of the eSafety Commissioner in 2015. Since then, many other countries have followed suit, introducing various laws aimed at protecting children online, particularly on social media platforms. A common feature among many
international regulations is the use of age verification to provide modified
accounts to children to protect them from harms, including age-inappropriate content,
communication with strangers, addictive design features, recommender
algorithms, and data collection.