3. I'm Concerned About This Post

Technological Features that Facilitate Online Harm

Overview

3.1
The question of the role of technology itself in causing harm is divisive amongst industry participants and users. One view holds that technology is by its nature neutral and merely a conduit for those who use it to harm others. An alternative view suggests that certain features of digital platforms can amplify harmful content to the point that those features themselves become a source of harm.
3.2
Social media and other digital companies utilise a number of technological tools and devices in order to provide users with their online experience. In recent years, many of these have been identified as contributing to online harm and causing it to proliferate. Major technology companies working in online spaces have responded to concerns regarding online safety in varying ways, and with varying degrees of success.
3.3
Social media and digital companies more broadly have responded to concerns regarding online safety on their platforms in a variety of ways, depending on the nature and functioning of their services. Despite these efforts, there is widespread criticism that the measures taken are not sufficient to adequately address harm.
3.4
This chapter examines how social media and digital companies have responded to the challenges posed by online safety, and the existing technological tools utilised by these entities that pose concerns for those involved with online safety matters.

Overview of social media companies’ online safety responses

3.5
This section provides an overview of measures that major social media and other key digital platforms have taken in response to online safety concerns.

Meta (including Facebook, Messenger, Instagram and WhatsApp)

3.6
Meta Platforms (Meta) is the parent company of multiple social media platforms such as Facebook, Instagram and WhatsApp. It was established in 2004 and is considered one of the Big Five American technology firms.
3.7
Meta’s general approach to safety in relation to its social media products hinges on its policies, known as the Community Standards, which outline what is and is not permitted on the platform’s services. Meta stated that these Community Standards are based on a set of values prioritising online safety and combating abuse, alongside values of privacy, authenticity, voice and dignity.1 The Community Standards prohibit certain content themes, such as hate speech, child exploitation, suicide and/or self-injury, violent or abhorrent content, and bullying and harassment.2 Updates are made regularly to the Community Standards based on current events online and offline.3
3.8
Some of the features that Meta have introduced to address online safety include:
Basic tools for users to manage their experiences of the platforms, including Block, Report, Hide, Restrict and Unfollow functions, in addition to other user-control tools such as managing comments to delete and restrict unwanted interactions, tagging controls, and controls over who can send direct or private messages to them;4
Issuing warnings or discouragement to users if they draft a comment or message which resembles a bullying or harassment comment;5
Establishing an Oversight Board, populated by 40 experts in human rights and technology, to make binding rulings on ‘difficult and significant decisions about content’ in relation to Facebook and Instagram content;6
Guidance towards authoritative sources of information when users search for particular topics, such as ‘domestic violence’, or blurring triggering images if a user searches for self-harm or eating disorders;7 and
Resources and learning modules focusing on online safety and the tools Meta offers to manage users’ experiences.8
3.9
Meta also has a Safety Advisory Board for its global operations, which consists of a number of organisations and individuals who are experts in online safety and which provides advice to inform Meta’s safety policies.9
3.10
In addressing concerns regarding young people using its platforms, Meta has worked with the eSafety Commissioner in addition to other international governments and industry partners to create a Youth Design Guide, which suggests the following principles be incorporated into any Meta product aimed at young people: ‘(1) designing for different levels of maturity; (2) empowering young people with meaningful transparency and control; and (3) undertaking data education for young people’.10
3.11
Meta has also acknowledged, and taken action to protect, other vulnerable user groups. For example, having identified women as a vulnerable user group on its services, Meta has implemented the following policies and features:
Ensuring gendered and culturally specific forms of harm are catered for in policies regarding prohibited content (e.g. expanding the bullying and harassment policy to incorporate stricter regulation of female-gendered cursing);
Investing in and leading efforts to combat the non-consensual sharing of images (including leading StopNCII.org, which enables people concerned that their image has been shared to work with online platforms to stop the proliferation of the content); and
Working with groups such as the Women’s Services Network (WESNET) and 1800 RESPECT to promote messaging regarding family violence and link users to these services.11

Google (including Search and YouTube)

3.12
Google is an American technology company which produces services and products primarily in relation to the online market. Founded in 1998, the company operates the search engine Google Search, in addition to other Google products, and also administers YouTube.
3.13
Digital companies sitting outside the social media space have different online safety considerations, depending on the platform they administer and its functionalities. As the operator of a search engine in addition to other platforms, Google has implemented a number of online safety policies to address harm on Google Search. Some of these policies include:
The Safe Search tool, which filters out explicit content in Google search results across images, videos and websites, and is a default setting for all signed-in users under the age of 13 with accounts managed by Family Link (a parent-established account which stipulates digital ground rules);
Removal of images of those under 18 years of age by request from the minor or their parent or guardian;
Removal of non-consensual explicit images by request; and
Content policies for particular features (such as autocomplete) to prevent dangerous or violent content from appearing in searches.12
3.14
In administering YouTube, Google Australia has focused on building products with built-in safety features specifically for children. Its two main products aimed at children are YouTube Kids, which is a specific child-centred app which uses filters and content moderation to provide safe content, and the Supervised Experience, which is available for parents using Family Link to enable children to access the main YouTube service but with adjustments to support children’s presence (including limiting types of advertisements, disabling video uploading, livestreams and reading or writing comments).13
3.15
Other safety policies applied by Google across the entire platform include:
Prohibition of ‘sexually gratifying’ content;
Prohibition of content that ‘endangers the emotional and physical well-being of minors’, which includes the sexualisation of minors, harmful or dangerous acts of minors, inflicting emotional distress, misleading content which is directed at minors but contains inappropriate themes, and cyberbullying;14
Prohibition of content encouraging dangerous or illegal activities, such as dangerous challenges, instructional videos on hurting yourself or others, and content praising eating disorders; and
Placing content warnings on particular kinds of content, such as content relating to topics such as suicide or self-harm.15
3.16
Google Australia also plans to disable certain functions for young people, including Location History.16

Twitter

3.17
Founded in 2006, Twitter is an American-based company, which runs a micro-blogging and social media service of the same name.
3.18
Twitter has a set of Rules and Terms of Service, which set out appropriate behaviour. Twitter describes these policies as ‘living documents’:
We're updating them every week and every month, given how rapid the changes are around these debates and around how we can move forward to make sure that women feel safe on the platform and that all vulnerable and underrepresented groups have a place and a voice until safe and welcome on Twitter.17
3.19
A non-exhaustive list of Twitter’s online safety features includes:
Basic functions for users to control their experience, including blocking other users, reporting functions and privacy settings to prevent direct messages or Tweets from unknown users;
Changes to control over algorithms, including the ability to turn off the default ranking system;18
Policies in relation to particular forms of harm, including policies on hateful conduct;19
A Trust & Safety Council which develops products and features in addition to improving Twitter’s rules;20
A Tips function that makes recommendations to users to improve online safety and privacy;
The Safety Mode, which blocks a person from using their account for seven days in cases of Rule or Terms of Service breaches;
Education resources, including for parents, young people and vulnerable groups.21
3.20
In addition, Twitter utilises a number of accountability features, including:
A Responsible Machine Learning Initiative, which provides information about the operation of Twitter’s algorithms and how the company has been attempting to improve them;22
The publication of biannual Transparency Reports, containing information about Twitter’s enforcement of its Rules;
The creation of the Twitter Transparency Centre, which covers a broad range of topics such as information requests, removal requests, Rules enforcement and other matters; and
Features aimed at academics in order to facilitate open access and developments across a wide network of experts to improve online safety and technology.23

Snapchat

3.21
Snap Inc. (Snap) is the parent company of the social media and camera-based app Snapchat. Established in 2011, Snapchat is an instant messaging service which differs markedly in its operation in comparison with other social media platforms such as Facebook and Twitter. This includes the lack of an open and uncontrolled News Feed, and a limited number of public spaces on the app, most of which are curated and pre-moderated by the platform. This, Snap argues, prevents the spread of harmful content to large audiences, and avoids the need to utilise artificial intelligence or automated moderation technology to detect harmful content.24
3.22
Snap stated that, in developing the Snapchat app, it has followed safety by design and privacy by design principles from the design stage through to the operations phase of work.25 It noted that Snapchat is ‘designed for private communications (either 1:1 or in limited-size groups), with the aim of encouraging users to interact creatively with their real friends, not strangers’.26
3.23
Other safety features on Snap include:
Community Guidelines which prohibit certain kinds of content and outline enforcement actions where breaches are identified;
Reporting tools for harmful content, monitored and actioned by a global Trust & Safety team;
A default deletion setting which provides that messages and Snaps are deleted from Snap’s servers once opened, and Stories on the platform are deleted after 24 hours;
Utilising technological applications to detect harmful content, including PhotoDNA and CSAI Match in relation to child exploitation material; and
Privacy features such as not displaying users’ friends lists to others and location sharing settings set to off by default.27
3.24
Snap also stated that it focuses strongly on the prevention of harm before it occurs. It explained that its focus on limiting content from public broadcast without pre-moderation, and preventing contact from strangers, enables the app to limit harm.28

TikTok

3.25
TikTok is a self-described ‘entertainment’ platform which focuses primarily on short videos uploaded by users. Originally launched in China as the app Douyin, the app’s international version, TikTok, was released in 2017 and officially launched in Australia in 2019.
3.26
Similarly to Meta and Twitter, TikTok has emphasised giving users control over their experience of the platform while also working to promote safety online.29
3.27
Safety features utilised by TikTok include:
Terms of Service and Community Guidelines, which set out prohibited behaviours and content;
In-app and off-app mechanisms to report harmful content;
A mix of technological and human-initiated detection and enforcement of harmful content;
Providing information and guidance in relation to tailoring online experiences via its Australian Safety Centre;
Automated detection of inappropriate or unkind comments which will prompt users to reconsider posting the comment;
Parental control features, including a family pairing feature which enables parents and guardians to link their account to their child’s account and utilise certain content and privacy settings;
Youth-specific policies, including turning off direct messages for accounts owned by users between the ages of 13 and 15 years old, default privacy settings, and age restrictions on video sharing, Duet and Stitch functionalities.30

How does social media technology create harm?

3.28
Social media platforms are among the most prominent digital actors. The vast majority of Australians engage with social media to communicate, conduct business-related activities, and undertake other activities. While there are undoubtedly positive functionalities embedded in social media platforms, concerns have been raised that certain elements of social media technology have the potential to cause harm and intensify existing harm.

Social media systems’ design

3.29
A common and strongly held sentiment heard from many witnesses was that social media services and platforms have almost universally not been designed with users’ safety and protection in mind. Rather, witnesses argued that social media platforms are primarily designed from a profitability perspective, which overrides the need to provide user safety.
3.30
Dr Hany Farid explained how the social media industry is reliant on extreme content to foster profitability:
You have to understand that social media—and I agree this is not entirely a social media problem but let me focus on that for a minute—give away their product for free. They are in the ad-delivery, engagement-driving business, which means engagement in and of itself is the product. As it turns out, humans are sort of awful, so the most hateful, salacious, outrageous, conspiratorial conduct is what engages. It's not that they're not able to do this; it's against their financial interests.31
3.31
This perspective was similarly expressed by Mr Peter Lewis, Director, Centre for Responsible Technology, who argued that the industry’s business model was essentially ‘not just providing the service but observing everything we do and then making money out of those actions’.32 He put the view that the industry is reliant on users spending as much time on its platforms as possible. Mr Lewis stated that social media companies were conscious that ‘if you wanted to totally maximise profits, you make your engagement as intense an experience as you can’, and that this behaviour was not necessarily in the public interest.33
3.32
Dr Michael Salter argued that the social media industry was designed primarily as a profit-generating business rather than as a tool that regulates the content provided to young people, a priority reflected in its platforms:
Social media has been designed and marketed to be particularly attractive to children and young people, but it was built without regard for user safety. The underlying business model aims to maximise profit by maximising the frictionless circulation of content and contact between users, with a minimum of expenditure on content moderation and oversight. This model has simply proved, over the last 25 years, to be incompatible with child protection.34
3.33
The Centre for Digital Wellbeing (CDW) put the view that social media platforms are driven by their profitability-based business models to ‘target and manipulate our social characteristics’.35 They further explained how this occurs and its impact:
The harms of this model are produced both through the algorithms they use to drive engagement and derive profits and through their design features such as filters, shares, likes and infinite scrolling. Such platforms engage us by hyperstimulating us and artificially producing validation. They feed belonging by generating collective outrage, and they cultivate and manipulate identity through algorithmically curated content.36
3.34
Ms Frances Haugen, a former Facebook (now Meta) employee who then became a whistleblower, explained that Facebook’s choice to use algorithms that promote extreme content is an example of the company’s prioritisation of profitability at the expense of safety.37 Another example was pointed to by Dr Hany Farid, who explained that Facebook’s banning of adult pornography was an instance where profitability overrode considerations of safety:
In the very early days of Facebook and YouTube, they banned legal adult pornography. When Mark Zuckerberg and Jack Dorsey tell you how much they love the First Amendment, would you please ask them why they banned perfectly protected speech. The reason they did it was that it was bad for business because advertisers don't want their ads running against sexually explicit material. So what did they do? They developed very good technology that, for the most part, keeps legal speech off Facebook and YouTube. It was not a technological problem.38
3.35
Victims of online harm also expressed frustration that reporting online harm would not lead to change in the platforms due to their business models being based on advertising and giving content as broad an audience as possible. Ms Erin Molan stated:
Even reporting things on Instagram, you feel like you're banging your head against a brick wall, because you look at their business model and […] advertising is the biggest thing for them. The more people they have on their account, the more advertising they get. They don't want to verify every single user on their account. They'd love one person to have 8,000 accounts because it gives them more people to sell to advertisers. So it just feels like you're banging your head against a brick wall. I can't see them ever doing anything off their own bat or ever cleaning it up themselves.39

Social media technology not being used to address individual or group harm

3.36
The inquiry demonstrated that there appears to be a double standard in how social media companies treat different categories of harm. Evidence to the Committee suggested that, while social media companies have the technology and capability to address online harm directed at individuals or groups, these capabilities are not consistently applied to ensure that such harms are adequately addressed.
3.37
As outlined above, many major social media companies have policies on acceptable standards of behaviour in addition to methods of detection. It was put, however, that the detection and removal of particular forms of harmful content and behaviour was not being carried out effectively.
3.38
The Committee observed during its examination of major social media companies such as Meta and Twitter that while these platforms were proactive and responsive in the detection and removal of certain forms of harmful content, such as misinformation and disinformation in relation to medical and electoral information, their responses to harm directed at particular individuals or groups were not as strong.
3.39
It was put to Google, for example, that while it heavily moderated content in relation to COVID-19 treatments as a form of misinformation and disinformation, there were examples of content containing abuse directed at public officials on YouTube which the platform had not removed when reported. Google responded by stating that, when reviewing complaints, its safety teams consider a range of contextual information, including whether the recipient of the abuse in question is a public official.40
3.40
Other social media services, such as those operated by Meta, were accused of focusing their energies on particular topics such as misinformation rather than on sources of individual and community-based harm. In response to claims that Meta had removed the account of a public official due to misinformation and disinformation concerns, Meta stated that it had acted to enforce its policies, which had been informed by expert and leading advice.41
3.41
Social media companies, often multinational corporations operating across numerous jurisdictions around the world, were also argued to be applying a ‘one size fits all’ approach rather than taking account of local laws and standards.42
3.42
A more holistic approach to online harm prevention was well articulated by Dr Kate Hall, Head of Mental Health at the Australian Football League (AFL), who said:
I think what we’re understanding about human behaviour is that, particularly for young people, or when people begin this type of behaviour, the peer and social norms of, I guess, guardrails are critical on making lasting change. Whilst a one-off deterrent might bring it to attention, those many other policy pieces and protectors in place then step in for something more sustainable, as an intervention in and of itself to deter others from this… I want to reiterate what Tanya said around unmasking and particularly the anonymity. I do think that’s a very critical piece of this puzzle about why this behaviour that is so harmful to others is able to flourish and to grow. It doesn’t have to be a very stringent act. It can actually start earlier, when people begin to engage in things that are perceived as harmful to others. We would want them held accountable early, instead of waiting for the behaviour to escalate to that point. All anti-social behaviour, I believe, at some point starts with a test. Then when there’s no action, it grows and becomes more and more harmful.43

Lack of a duty of care on social media platforms

3.43
Representatives from the social media and digital industry stated that many companies had implemented policies to protect users, but further action would require cooperation from multiple stakeholders.44 Ms Sunita Bose, Managing Director, Digital Industry Group Inc. (DIGI), explained that the lack of consistent standards across the digital industry made it difficult to ensure safety across all platforms, which DIGI was working to address through its work in drafting new industry codes of practice.45
3.44
As part of its inquiry into the adequacy of existing offences in the Commonwealth Criminal Code and of state and territory criminal laws to capture cyberbullying, the Senate Legal and Constitutional Affairs References Committee recommended that the Australian Government ‘legislate to create a duty of care on social media platforms to ensure the safety of their users’.46 In response, the Australian Government noted this recommendation, indicating that it would monitor developments in other jurisdictions regarding a duty of care at law for digital platforms.47 The Government’s response stated:
The Government considers that online safety is a shared responsibility, and that content and behaviour which is prohibited offline should also be prohibited online. The Government considers that social media platforms and other technology firms need to recognise that their responsibility for tackling harmful behaviours and content goes hand-in-hand with their influential and important position within Australian society. It is particularly important that industry participants whose products and services are used by children take appropriate action to uphold the safety of their users.48
3.45
The response further stated that, in the event that digital platforms and companies ‘fall short of community standards’, the Australian Government would consider how best to protect Australians online via regulatory means.49

Lack of effective detection practices for harmful content

3.46
Social media platforms generally have policies and systems in place for responding to violations of their terms of service, rules and community standards. Evidence from multiple digital platforms indicated that most social media companies use a combination of common elements to detect and remove harmful content (a simplified sketch of such a pipeline follows this list):
Detecting harmful content prior to its publication and distribution on platforms, utilising artificial intelligence (AI) and other forms of automated detection systems in combination with human oversight teams; and
Encouraging users to report content they encounter, in order to prevent its proliferation and avoid further harm.
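For illustration only, the sketch below shows how these elements might fit together in a simplified moderation pipeline. The class names, thresholds and term list are hypothetical and do not describe any particular platform’s implementation.

```python
# Illustrative sketch only: a simplified moderation pipeline combining automated
# detection, human review and user reporting. All names, thresholds and terms are
# hypothetical and do not describe any specific platform's implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentItem:
    item_id: str
    text: str
    user_reports: int = 0

@dataclass
class ModerationQueue:
    pending_human_review: List[ContentItem] = field(default_factory=list)
    removed: List[ContentItem] = field(default_factory=list)

def automated_harm_score(item: ContentItem) -> float:
    """Placeholder for an automated classifier (e.g. a hash match or ML model)."""
    banned_terms = {"exampleslur"}  # hypothetical term list
    hits = sum(term in item.text.lower() for term in banned_terms)
    return min(1.0, 0.9 * hits)

def triage(item: ContentItem, queue: ModerationQueue,
           remove_threshold: float = 0.9, review_threshold: float = 0.5) -> None:
    """Remove high-confidence violations; send borderline or reported items to humans."""
    score = automated_harm_score(item)
    if score >= remove_threshold:
        queue.removed.append(item)               # high confidence: act automatically
    elif score >= review_threshold or item.user_reports > 0:
        queue.pending_human_review.append(item)  # borderline or user-flagged: human review
    # otherwise the item is published without further action

queue = ModerationQueue()
triage(ContentItem("1", "a harmless post"), queue)
triage(ContentItem("2", "contains exampleslur"), queue)
triage(ContentItem("3", "reported by a user", user_reports=2), queue)
print(len(queue.removed), len(queue.pending_human_review))  # 1 1
```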
3.47
Most online platforms appeared to predominantly utilise automated and AI systems to detect harmful content in the first instance.
3.48
Effective tools to detect forms of harm, particularly child sexual abuse material (CSAM) and other extremely harmful material, were argued by submitters to be critically important for addressing online harm on digital platforms. Nonetheless, some witnesses were critical of the efforts being made by digital companies in detecting online harm.

Detecting child exploitation material

3.49
Witnesses drew particular attention to, and emphasised the urgency of, the detection of CSAM, and the role of social media and digital companies in this task. Dr Michael Salter, an expert in CSAM and online harm, put the view that social media platforms were being used by predators to contact and groom children and young people, while the platforms appeared unable to address the issue adequately:
Social media companies tell us that they are just as concerned about child safety as we are, but the amount of child sexual abuse material reported to Australian and overseas authorities increases every year. Prosecutions for child sex exploitation offences in Australia are also increasing year on year. Abusers are using social media to circulate child sexual abuse material. They are using social media to contact and sexually harass children and to manipulate and extort them into producing nude or sexual content. Abusers are also using social media to connect with each other to create online abuse communities to justify their sexual interests and to publicly argue for policy reform that compromises child protection efforts.50
3.50
Dr Salter explained that, because social media and digital platforms make representations that they cannot adequately detect CSAM, some users (a number of whom were victims of CSAM themselves) were witnessing and reporting CSAM to social media companies and authorities.51 He explained further:
Really one of the most distressing examples was this. I spent some time a couple of years ago working with a group of Twitter users who were child abuse survivors. Images had been made of their abuse. And they were on Twitter flagging child sex abuse material on Twitter. I have to say that the content that I saw when I was doing research with this group was the most serious content you can imagine on Twitter. It included videos of infant children being raped. It was absolutely horrific. This was content that was widely circulating on Twitter, and it was up to child sexual abuse survivors themselves to hunt down this content and report it to Twitter because there didn't seem to be any effective proactive measures by Twitter to take that content offline.52
3.51
In March 2020, the Five Country Ministerial (consisting of the governments of Australia, Canada, New Zealand, the United Kingdom and the United States) launched the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse (Voluntary Principles). The Voluntary Principles were designed in partnership with industry actors such as Meta, Google, Snap, TikTok and Twitter, as well as non-government groups, academics and others, to encourage awareness of the issue and prompt action. The Voluntary Principles’ themes, which encompass 11 principles, include:
The prevention of child sexual abuse material;
The targeting of online grooming behaviour;
The targeting of livestreaming for the purposes of child exploitation;
Preventing the accessibility of child exploitation material in search results and automatic suggestions;
Adopting appropriate and specialised safety measures for children in particular;
Responding appropriately to material found, including reporting options for users and alerting authorities; and
Collaborating and responding to evolving threats.53
3.52
Since the adoption of the Voluntary Principles, the Technology Coalition (a global alliance of technology companies, including Meta and others) has announced Project Protect, a 15-year commitment by member companies to address and invest in the prevention of child exploitation.54 The Department of Home Affairs (Home Affairs), however, stated that ‘[i]n almost two years since tech companies endorsed the Voluntary Principles, there is limited evidence as to the degree of implementation and the level of success’.55
3.53
Google Australia stated that it was ‘committed to stopping the use of our platforms to spread child sexual abuse material (CSAM)’.56 Google explained that it primarily used its Content Safety API and CSAI Match tools to detect CSAM, in addition to its Trust & Safety teams, which address incidents as they arise. It stated that it uses ‘hashes’ to automatically detect certain forms of harmful content prior to it being viewed. Its systems automatically remove content ‘only when there is high confidence of a policy violation’ and any borderline cases are flagged for review.57 It stated that this approach resulted in 94 per cent of removed content being flagged by its systems rather than humans, and that almost 40 per cent of the videos detected by AI were never viewed.58 Further, Google Australia stated that it was developing new technology to assist in identifying CSAM content on its services.59
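The Content Safety API and CSAI Match are proprietary systems whose internals are not described in the evidence. The sketch below is a generic illustration of the hash-matching flow described above, using a plain cryptographic hash and an invented placeholder fingerprint for simplicity; real tools such as PhotoDNA and CSAI Match use perceptual hashing, which tolerates re-encoding and editing.

```python
# Illustrative sketch of hash-based matching against a database of known harmful
# material. A plain SHA-256 is used here purely to show the overall flow; the
# fingerprint below is an invented placeholder, not a real hash list entry.
import hashlib

KNOWN_HARMFUL_HASHES = {
    # In practice such hash sets are supplied by child-safety bodies and industry partners.
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def matches_known_material(uploaded_bytes: bytes) -> bool:
    """Return True if the upload's fingerprint matches a known-harmful hash."""
    return sha256_hex(uploaded_bytes) in KNOWN_HARMFUL_HASHES

def handle_upload(uploaded_bytes: bytes) -> str:
    if matches_known_material(uploaded_bytes):
        return "block_and_report"   # matched content is blocked before it can be viewed
    return "publish"                # unmatched content proceeds through normal moderation

print(handle_upload(b"example upload"))  # publish
```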
3.54
Like Google, Snap Inc. stated that it also utilises technology such as PhotoDNA and CSAI Match which assists in identifying known images and videos of CSAM.60
3.55
Meta provides limited publicly available information about how it detects CSAM and other forms of child exploitation material. In Meta’s Community Standards Enforcement Report for the Q4 2021 period, it stated that approximately 97.5 per cent of violating content in relation to child exploitation material was identified by Meta itself, while the remaining 2.5 per cent was reported by users.61 In evidence provided to the Joint Standing Committee on Law Enforcement, Meta asserted that it had developed two new kinds of detection technology for images and videos, which it has made open source for other platforms to use.62 This information is extremely difficult to find in the public sphere aside from that submission. Beyond references to its public commitments to reduce child exploitation material online, Meta does not provide further details of its detection practices.63

Detecting abusive content

3.56
Concerns were raised by a number of witnesses that the detection of harmful or abusive content more generally on social media platforms was insufficient to appropriately protect users from harm.
3.57
Witnesses to the inquiry reported three commonly experienced issues:
The failure of social media platforms to adequately uphold their terms of use, including community standards and policies;
The lack of detection by social media platforms of harmful content, including the failure of algorithms to detect all forms of harmful content; and
The subsequent difficulty in having harmful content removed or seeking redress.
3.58
For example, footballer Ms Tayla Harris explained to the Committee that she has received a significant amount of gender-based online abuse which is not detected by social media platforms, and that the removal of that content often requires substantial and lengthy engagement with platforms individually.64 The experiences described by Ms Harris were echoed by many witnesses and submitters.
3.59
Twitter was asked about its approach to managing abuse and harassment at scale directed at individuals. It stated that it factored in two considerations when responding to this issue: examining how the platform may be incentivising actors that take advantage of the service and circumvent its policies and rules, and examining how it can reduce victims’ role in reporting abuse by utilising technology:
Under looking at the design of Twitter, we've really strengthened our policies around the platform manipulation … where we have not only added new teams but also got a lot better at using machine learning to identify when we receive some sort of swarm or sudden uptick in attention or abuse, especially targeted at specific or individual accounts. We are then able to then have that surfaced and flagged to our Twitter service or concept moderation team, which is able to look at this against our policies and then take coordinated action across the accounts that are participating in that behaviour instead of trying to do it at that one-off back and forth. We've seen really strong moves towards being able to take this down at scale and we've seen this actually result in fewer abuse reports having to be submitted to our teams to be able to see that this kind of attention needs to be given to a specific case or a specific group of accounts.65
3.60
The platform stated that it had also introduced a new feature which prompted users to reconsider posting a Tweet if abusive language or content had been detected, which was reportedly positively accepted by users.66 Twitter also noted that it has a function which enables people without user accounts to make reports if they become aware of inappropriate content on the platform.67
3.61
Twitter stated that, while it did not tolerate abuse and harassment on its platform, it did ‘allow for certain inflammatory or strident language’ depending on the broader context.68 In investigating a reported allegation of abuse or harassment using abusive language, Twitter stated that it would examine the conduct and whether it was compliant with its rules and terms and conditions. It would also look at factors such as whether the involved user accounts followed each other and other ‘behavioural signals’ which would provide a clearer picture of the situation.69 Twitter explained:
[T]here are instances where we have seen certain phrases or strident language that are sometimes used in a slang way, as a community or even sometimes as terms of friendship. Our former VP Del Harvey used to use the example that 'hey, b-i-t-c-h' could be seen as a greeting to a friend or could be seen as abuse, depending on the context. As you'll see, especially within a younger demographic, there are terms that are used in a way that might seem on their face to be abusive but at other times would be seen as appropriate language between people who know each other.70
3.62
Meta similarly argued that context and intent are important in assessing whether material met the definition of abusive material on its platforms.71 It outlined the ways in which it identifies abusive material on its systems:
We use human review and developed AI systems to identify many types of bullying and harassment across our platforms. However, as mentioned above, because bullying and harassment is highly personal by nature, using technology to proactively detect these behaviours can be more challenging than other types of violations. It can sometimes be difficult for our systems to distinguish between a bullying comment and a light-hearted joke without knowing the people involved or the nuance of the situation. That's why we also rely on people to report this behaviour to us so we can identify and remove it.72
3.63
Meta explained that it encourages its users to report any harmful content they identify, which is then assessed so that Meta can ‘action the content consistent with our policies’.73 Meta also stated that it had recently begun investing in detection technology (such as forms of AI and automated detection technology) to identify and remove harmful content before it is seen and reported to the platform.74 The company reported that its AI systems had increased the percentage of material identified proactively, rising to 59.4 per cent of all bullying and harassment material on Facebook from 25.9 per cent a year earlier.75 The AI systems also proactively detected 83.2 per cent of all bullying and harassment material on Instagram.76
3.64
Google outlined its detection tools in its submission, although it provided limited detail:
We have robust mechanisms to monitor compliance with our policies and to enforce our policies. We rely on a mix of human and technological intervention: we encourage all users to report content that violates our Community Guidelines; we have established the YouTube Trusted Flagger programme, by which individual users, government agencies and NGOs can notify content that violates our Community Guidelines; and we have developed machine learning classifiers to automatically and quickly identify and remove potentially violative content. Content that is found to violate our Community Guidelines is removed; in addition, enforcement may have repercussions for those who violate our policies and may result in channel or account terminations.77
3.65
Snap stated that it mostly uses a team of expert analysts to moderate content online, in addition to technological tools to detect abuse such as CSAM.78 It stated that, given that it is primarily a platform that facilitates communication between two people or small groups, it attempts to seek a balance between the detection of harmful content and respecting the privacy of users.79

Attacks on high-profile individuals

3.66
As outlined in Chapter 2, the Committee heard evidence from multiple high-profile witnesses, ranging from news journalists to disability advocates to professional football players, who had experienced online abuse via social media platforms. A vast range of online harm is directed at individuals by virtue of their job, position, gender, race or other characteristics, and these characteristics are used to subject people to severe and sometimes sustained online abuse.
3.67
Twitter stated that it was aware of trends that suggested heightened abuse and harassment directed at women in prominent positions, such as politicians and journalists.80
3.68
Some of the examples given of online harm directed at public figures include:
When I first started on The Footy Show I noticed some of the commentary—and I would never seek out the commentary written online; I learnt that lesson very early on. Things that were sent directly to me on platforms that I used professionally were just horrific. They were not things like, 'We don't like watching you.' They were things like: 'We will ensure you die. I will hit you with a bus.' These things were so horrific. What they said they would do to me if they ever saw me made me fear for my safety essentially and made me nervous about going outside. There was the detail. People would send me things that they would hope to do to either me or my child.81
Due to me speaking up in defence of my own company and my own sex-based rights, I have received death threats, rape threats, general threats of violence and countless instances of misogynistic abuse.82
3.69
When these issues were raised with various platforms, most made a distinction between online harm directed at individuals with no public profile and online harm directed at figures with a high public profile due to their job. Most platforms commented that they had a higher threshold for the takedown of abusive content for public figures, citing freedom of speech as an explanation for this increased threshold. This was evident during the committee hearings with Google and Twitter:
Chair: Would this comment breach your community standards: would a user of your platform be censored or banned for calling a man or woman a 'whiny little b-i-t-c-h'? I would like a simple yes or no answer at this stage.
Ms Longcroft: I understand that you are referring to a particular case that relates to a public figure here in Australia. Again, those standards would have to take into account the context and the nature of the person who had made the comment. In determining any particular case—and I wouldn't comment on a particular case—those very clear policies would be met.83
In terms of any sort of specific phrases or terms, from a Twitter perspective, we do allow for certain inflammatory or strident language. That being said, the statement that you just read out, if it were in the context of targeting an individual or of crossing that threshold into abuse, we would of course review it, under the Twitter rules and terms of service, and take into account any account context. This would depend, again, on who was being @ mentioned, if these accounts followed each other and a number of behavioural signals that would allow us to understand fully what's going on in that situation.84
3.70
The Committee considers that there are two challenges with this approach by platforms.

Dehumanisation of public figures in public discourse

3.71
The first is that the different threshold for taking down abusive content online suggests it is acceptable to dehumanise public figures. Experiences of dehumanisation were cited by Ms Erin Molan, Ms Tayla Harris and the Committee Chair.
You never think that it will impact in the way that it does. As I said, I'm not just talking about people in the public eye, personalities and reality stars. They should all be afforded the same protection as everyone else.85
Chair: I have given a very direct example that didn't include the word b-i-t-c-h: 'cavorting whore'. Is there any context that you can think of in Twitter's hateful content policies under which that would be seen to be acceptable and not hateful content or abusive content? I will be honest; I can't think of one, but I would like to think of one if you know of one.86
‘Every single piece of evidence means this is the exact example of where someone thinks it's acceptable to make awful comments to a stranger online. They get away with that, and what are they going to get away with next? They test things.' That was something that brought the severity of the whole situation to life.87

An increase in online abuse towards women in public positions

3.72
The Committee is concerned that allowing a higher threshold for public figures could contribute to the increase in online abuse, particularly against female public figures. Twitter explained this growing tendency in online discourse:
Unfortunately, it is very common for women in public life to receive those types of threats. We're regularly talking to female journalists as well who are receiving those types of comments.88
3.73
Female public figures who presented evidence to the Committee noted that:
Giggle’s App Store and Google Play pages frequently receive ‘one star reviews’ full of misogynistic slurs, abuse and blatant lies. The goal of the posters is to discourage women from using an online female space and/or to have the female space removed from the market completely… In addition to the abuse towards Giggle, I personally receive online abuse due to my status as the founder and CEO of Giggle and for taking a public stance to protect female-only (single sex) spaces.89
It is an increasingly acknowledged reality that social media platforms host a huge amount of vile, threatening, violent, or sexually explicit and pornographic content. Much of this content is directed at women and is posted anonymously. In many cases this explicit and threatening content is publicly visible to children.90
The other thing I mentioned earlier was regarding being a woman. I've been a woman in a male-dominated field for many, many years. When I saw other people, potentially, talk out about this you would see so much commentary around it, that they were 'playing the victim' or 'playing the gender card'. It made me very reluctant to ever speak about this experience, because so much of the content and the things that were said to me had an angle that involved my gender. But the second you say that you get accused of playing that gender card, and that's something I never wanted to be accused of, I never wanted to do.91

Public versus private – abuse is abuse

3.74
A long trail of trauma, emotional suffering, reputational damage and sense of shame can be left regardless of whether abuse is directed towards someone with a public profile or not. Mr Chad Wingard noted that online trolls viewed him as a public personality and AFL player, rather than as a person who played professional AFL as a job: ‘A lot of people say that that comes with being an AFL player. But being bullied or discriminated against is not in the job description’.92
3.75
The National Mental Health Commission (NMHC) outlined that the impact of abuse is not lessened because an individual is well-known or because they have a prominent job or title:
I go back to the point that the research is showing, which is that, when you are looking at issues of harm, it is highly specific to the individual and their usage. It's very highly specific. Of course, we all have vulnerabilities. We all have strengths. We all have different antennae. So I think it is very complex in that way. That would be my point. As I say, that's why, if I were looking at a cultural shift here, I would move very strongly to a 'do no harm' space and then look at what we need to do to shift towards there.93
3.76
The allowance of this behaviour, coupled with the reasoning offered by Big Tech for the two sets of standards, is of concern to this Committee. Witnesses frequently noted that free speech, context, or legitimate dissent or disagreement were cited as reasons for allowing abusive content to stay online.
Social media corporations regularly fail to take action over direct threats of violence, wishes of harm against women, and threats of sexual assault against women. In some cases, this content is left online by social media companies despite being in breach of their own policies, while substantial effort is placed into moderating or banning other content which contains no threats or abuse.94
3.77
Ensuring there is a consistent standard applied by social media platforms when removing harmful or abusive content online will assist in reducing the proliferation of online harm and help to drive cultural change and improve the standard of public discourse online. This view was shared by various witnesses:
It feels that the disconnect has been that social media companies have written their own rules, where other publishers and businesses that are disseminating information and creating content have a different set of rules and standards in the community and the wider sector that they have to obviously work within.95
The impact of sexist, misogynist, ‘gendered hate speech’ (GHS) attacks or abuse of women, both in social media and online, have a significant impact. D’Souza et al state that: “The direct effects of GHS on the individual targets are neither trivial nor inconsequential. GHS has lasting impacts on women in terms of both their mental health and ability to participate in society free from fear.”96

Online hate

3.78
Hate, including discrimination and hate speech, on online platforms was put as a significant issue facing users. As outlined in Chapter 2, users from particular backgrounds are more likely to experience discrimination and hate speech, including those from culturally and linguistically diverse (CALD) backgrounds, women, people with disability, migrant and refugee groups, and Aboriginal and Torres Strait Islander peoples.
3.79
Twitter stated in its submission that it had adopted a much broader definition of hate speech than currently exists in the Racial Discrimination Act 1975 (Cth) (RDA). It stated that the RDA currently addresses hate speech only in relation to racial discrimination, while Twitter had opted to include other categories of hate speech, such as those relating to sexual orientation, disability and gender.97
3.80
Twitter further explained that it had conducted public consultation in relation to some of its policy updates, which informed some of these changes:
That recommendation … came about during our dehumanisation updates to our hateful conduct policy. We had a multistaged approach for updating our hateful conduct policy that stepped through a number of marginalised communities, vulnerable communities and areas where we were seeing the contours of the online conversation had changed and were starting to not meet up with community expectations, where there was the ability for people to control their own experience or to report and have taken down content from Twitter. During these dehumanisation consultations we worked with a number of organisations here in Australia … We received a lot of feedback that, while there was hateful conduct being directed to certain individuals, it was the group conversations and the chronic abuse that vulnerable communities were suffering that was most harmful to them. This was directly inputted into the update that we had around racism and national origin and ethnicity updates to our dehumanisation policy.98
3.81
Twitter argued its Terms of Service and Rules are clear in relation to how it views discrimination:
With regards to our hateful conduct policy, we are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalised. The policy makes clear that no one on Twitter may promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.99
3.82
Meta similarly argued that its definition of hate speech was significantly broader than the definition provided by Australian legislation. Meta stated that it defines hate speech as ‘a direct attack against people on the basis of what we call protected characteristics’, which include race, ethnicity, place of origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.100 ‘Attacks’ were defined by Meta as:
violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing, and calls for exclusion or segregation.101
3.83
Meta’s latest Community Standards Enforcement Report, covering the period July to September 2021, indicated that the company took action against 22.3 million items of content containing hate speech, 96.5 per cent of which was detected before it was viewed by people.102 Meta also indicated that the prevalence of hate speech on its platforms was 10 to 11 views of hate speech per 10,000 views of content, a rate of roughly 0.10 to 0.11 per cent.103 Meta noted that hate speech was recognised as being exceptionally difficult to track via AI as it is ‘dependent on nuance, history, language, religion and changing cultural norms’.104
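As a simple check of the prevalence figure, 10 to 11 views per 10,000 views corresponds to:

```latex
% Prevalence expressed as a proportion of all content views:
\[
  \frac{10}{10\,000} = 0.0010 = 0.10\%, \qquad
  \frac{11}{10\,000} = 0.0011 = 0.11\%
\]
```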
3.84
Nonetheless, witnesses raised concerns regarding Meta’s commitment to identify and remove hate speech. Ms Frances Haugen, former Facebook employee, stated that Meta could identify and remove significantly more content if it chose to invest resources appropriately. She said:
A thing that might not be perfectly obvious is why Facebook takes down so little hate speech, and doesn't even take down hate speech in that many languages. There are 5,000 languages in the world, and a lot of the most fragile places are linguistically diverse. There are also spelling differences. American English is not the only version of English in the world, as you know. AI is not very smart. If you invest enough effort and you use the right techniques you can get very precise classification systems. But you're always forced to trade off between what fraction of the things do you want to catch and how often do you want your judgement to be wrong? Facebook has tried to be conscientious and avoid taking out content that is not violating. I think they genuinely value freedom of speech a great deal. But, when you look at those trade-offs, the trust they experience today at their current level of investment, they could invest more and take down more and still be more accurate and make fewer mistakes. But, if Facebook has to choose based on the systems that it has today, how much of the hate speech that exists do they want to take down and are they are willing to be wrong one in 10 times, one in 100 times, one in five times? They can take down more if they're willing to make more of those mistakes.105
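The trade-off Ms Haugen describes corresponds, in machine-learning terms, to the choice of a classification threshold: a lower threshold removes more of the truly hateful content (higher recall) but wrongly removes more legitimate speech (lower precision). The sketch below illustrates this with invented scores and labels; it is not Facebook’s classifier or data.

```python
# Illustrative only: how the choice of classification threshold trades recall
# (fraction of truly hateful posts caught) against precision (fraction of removals
# that were correct). Scores and labels below are made up for demonstration.
from typing import List, Tuple

# (classifier_score, is_actually_hate_speech)
SAMPLE = [
    (0.95, True), (0.90, True), (0.85, False), (0.70, True),
    (0.60, False), (0.55, True), (0.30, False), (0.10, False),
]

def precision_recall(samples: List[Tuple[float, bool]], threshold: float) -> Tuple[float, float]:
    removed = [label for score, label in samples if score >= threshold]
    true_positives = sum(removed)
    total_hateful = sum(label for _, label in samples)
    precision = true_positives / len(removed) if removed else 1.0
    recall = true_positives / total_hateful if total_hateful else 1.0
    return precision, recall

for threshold in (0.9, 0.6, 0.5):
    p, r = precision_recall(SAMPLE, threshold)
    print(f"threshold={threshold:.1f}  precision={p:.2f}  recall={r:.2f}")
# A lower threshold catches more hateful content but makes more mistakes.
```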
3.85
Ms Haugen also noted that a considered understanding of what constitutes hate speech must accommodate a significant amount of subtlety and nuance, which is extremely difficult for AI to capture. Using the example of a statement that ‘white paint colours are the worst’ made at a hardware store, she explained that a person would understand the context and recognise that it was not hate speech, whereas a computer would not.106 Ms Haugen argued that examples such as this demonstrated that removing harmful content was not necessarily practical, whereas reducing the systematic amplification of harmful content was more achievable.107

Algorithms on social media and other digital services

3.86
Algorithms are regularly cited as a feature of digital platforms that can be harmful. Meta described algorithms simply as ‘just a set of rules that help computers and other machine-learning models make decisions’, which are used in its systems to ‘rank and distribute content’, such as in its News Feed on Facebook.108
3.87
Home Affairs defined algorithms in the context of the digital industry as being ‘used to selectively predict the information that a user is more likely to engage with based on information about the user, such as location, past click-behaviour and search history’.109 Home Affairs explained that algorithms are primarily used on social media to ‘target users with content that appeals to their interests (filter bubbles)’.110
3.88
Google Australia explained why it uses algorithms and how they work on its platforms:
With the vast amount of information available, finding what our users need would be nearly impossible without some help sorting through it. Google’s ranking systems are designed to do just that: sort through hundreds of billions of web pages and other content in our Search index to present the most relevant, useful results in a fraction of a second.
To give users the most useful information, Search algorithms look at many factors and signals, including the words of their queries, relevance and usability of pages, expertise of sources, the user’s context, such as their location and language, and settings. The weight applied to each factor varies depending on the nature of their queries. For example, the freshness of the content plays a bigger role in answering queries about current news topics than it does about dictionary definitions.111
3.89
As an example, Google Australia explained that YouTube’s recommendation algorithms take into consideration signals relating to a user’s account (e.g. their watch history, previous searches and their location), and that these signals are overridden by YouTube’s safety signals where necessary to reduce recommendations of harmful material.112 Similarly, Meta stated that it uses algorithms in a variety of ways across its platforms to filter and rank content, some of which are designed to identify and remove harmful content.113
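The descriptions above amount to ranking candidate content by a weighted combination of signals, with a separate safety signal able to demote likely-harmful items regardless of their predicted engagement. The sketch below illustrates that general idea only; the signal names, weights and thresholds are invented for demonstration and are not any platform’s actual formula.

```python
# Illustrative sketch of weighted-signal ranking with a safety demotion, as the
# platforms describe in general terms above. Signal names, weights and thresholds
# are invented for demonstration and are not any platform's actual formula.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    relevance: float             # e.g. query/interest match, 0..1
    predicted_engagement: float  # e.g. likelihood of a watch or like, 0..1
    freshness: float             # newer content scores higher, 0..1
    borderline_score: float      # safety classifier output, 0..1 (higher = more likely harmful)

WEIGHTS = {"relevance": 0.5, "predicted_engagement": 0.3, "freshness": 0.2}

def rank_score(c: Candidate, borderline_cutoff: float = 0.8, demotion: float = 0.5) -> float:
    """Combine signals into a single score, then demote likely-harmful content."""
    base = (WEIGHTS["relevance"] * c.relevance
            + WEIGHTS["predicted_engagement"] * c.predicted_engagement
            + WEIGHTS["freshness"] * c.freshness)
    if c.borderline_score >= borderline_cutoff:
        return base * (1.0 - demotion)   # safety signal overrides engagement
    return base

candidates = [
    Candidate("a", 0.9, 0.8, 0.5, 0.1),
    Candidate("b", 0.7, 0.95, 0.9, 0.9),  # highly engaging but flagged as borderline
]
ranked = sorted(candidates, key=rank_score, reverse=True)
print([c.video_id for c in ranked])  # ['a', 'b'] despite b's higher engagement signals
```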
3.90
Concerns have been raised about the ways in which algorithms can ‘up score’ negative or dangerous content. Home Affairs outlined its concerns that the selective prediction technology used by algorithms effectively encourages users to become isolated within particular viewpoints or ideologies, which can ‘fuel racism, violence, and extreme political and/or narratives’.114
3.91
Home Affairs explained that algorithms utilised ‘persuasive technology’, which provides psychological rewards for engaging with the platform and ultimately encourages users to become reliant on the technology. It stated that persuasive design techniques are:
design approaches conceived by human developers to maximise a user’s engagement, usually tapping into social rewards and psychological behaviour, such as social reciprocity on social media platforms; pull-to-refresh content; or “streak rewards” in games or social messaging platforms. These designs reinforce and reward behaviours, such as people “liking” or “sharing” content. By expanding this and “pushing” content at users via notifications and reminders, people are being conditioned to constantly engage with and even rely on these platforms and services, becoming digitally dependant.115
3.92
Reset Australia highlighted three forms of harm that can flow from unmoderated algorithmic content prioritisation:
societal harm (the over-promotion of divisive content);
community harm (the tendency towards racism, sexism and other forms of discrimination); and
individual harm (including the normalisation of potentially harmful content).116
3.93
The Centre for Digital Wellbeing made a similar point, noting that algorithm-driven social media is creating harm at both the individual and societal levels:
The impact of such platforms exists at both the individual level and the aggregate societal level. It is important to see the effect of social media on individuals as a continuum. Much of the focus of regulation has been on behavioural events, such as bullying, harassment or digitally enabled abuse. But, for a generation, the risks of usage include addiction, depression and anxiety. These impacts appear to be gendered, with young women most at risk.
At a societal level, we are only beginning to understand the full influence of mass algorithmic engagement, but it is clear that social media is increasing polarisation and division. Such platforms are also the perfect conduit for the spread of misinformation and disinformation. The potential societal impacts are perhaps the most profound, yet to date much of the regulatory approach has focused on individual harms.117
3.94
Individual harm can also flourish in other ways due to algorithms. An example of harm online was raised by Eating Disorders Families Australia, which highlighted how pro-eating disorder content was easily found online on social media services, and often suggested by the platforms’ algorithms which make recommendations based on a user’s interests. They pointed to TikTok’s algorithm as particularly harmful, arguing that the platform tended to target young people’s accounts in recommending ‘diet’ content, and would offer increasingly extreme content over time.118
3.95
Eating Disorders Families Australia explained the impact that algorithms promoting harmful content could have on vulnerable users:
For example, if a user watches a “pro ana” (i.e., pro Anorexia) or “pro mia” (i.e. pro Bulimia) video, then they are likely to be supplied with more weight loss and “thinspo” (i.e. content to encourage them to lose weight) content, again resulting in validating and triggering behaviour which is known to intensify the deleterious impact of eating disorders. The impact of these social media sites is exacerbated by the fact they are visual and comparative in nature as well as encouraging users to be competitive in their postings, all of which are inherently problematic for young people battling with eating disorders.119
3.96
Witnesses also pointed to evidence suggesting that social media platforms are often aware of the harm algorithms cause but are unwilling to address it. Home Affairs stated that digital companies may be aware that their algorithms promote harmful or extreme content, but that this is overlooked in the interests of increasing viewership and, in turn, revenue.120 Platforms such as YouTube have also been found to have significant issues with the way they use algorithms. The Australia Institute observed that, while there are documented harms relating to the radicalisation of individuals via YouTube, the company has been unwilling to explain exactly how its algorithms work or to provide access that would enable broader understanding, and so the issue remains unresolved.121
3.97
Ms Frances Haugen stated that the social media business, and in particular Meta, is heavily reliant on large numbers of users generating and consuming content in order to maximise its profitability. She stated:
Right now, Facebook is dependent on very, very large groups, like 500,000, groups of millions of persons, to fill people's newsfeeds with enough content. The amount of content people were producing on Facebook when it was just about their friends and families was enough that people could get online, spend 30 minutes, an hour, catch up with their friends and then go and do something else with their lives. If Facebook wants to make more and more money every year, they have to keep you on their site longer and longer. The business model becomes the problem then, right? Once you start being dependent on these hyperamplification nodes—when you have a group that has half a million people, a million people, it's not: say something offensive and there are 20 people in the room, with at most 20 people seeing that content. If I say something offensive in a room that has a few million people, the algorithm has a bias that the more extreme the content is, the more people it will reach. Suddenly, the thoughtful response to my thing doesn't get shown to two million people. That's not extreme enough. But my extreme thing goes out to two million people.122
3.98
The Australia Institute pointed to an example contained in documents leaked by Ms Haugen, which described how Facebook’s algorithms ‘promoted posts which provoked angry reactions, as they generated more engagement than those which generated positive or neutral reactions’.123 The papers suggested that employees tried to resolve this issue but were prevented from doing so by senior management, including Meta CEO Mark Zuckerberg, who cited concerns that ‘any intervention would lead to less engagement’.124 Further, The Australia Institute noted that platforms such as Facebook were said to be overly dependent on AI technology rather than human detection of harmful content.125
3.99
Dr Salter explained that algorithms on social media platforms (particularly YouTube) effectively suggest material to paedophiles because they detect groups of users who are consistently looking at sexually explicit material of children and recommend similar videos. He put the view that this creates an ‘alternate reality’, in which paedophiles are recommended sexually explicit content of children that is not visible to others.126
3.100
Further, Dr Salter said that, because many social media platforms rely on users reporting harmful content, users who are accessing sexually explicit material of children are unlikely to report it, which creates a void in which the content is not monitored.127 He provided an example of this situation in the context of YouTube, previously discussed in Chapter 2:
… on YouTube the algorithm was detecting a group of users that were only looking at this potentially sexually explicit content of children, and the YouTube algorithm was then automatically generating a playlist of videos of children dancing, children doing gymnastics and children in swimwear. So, effectively, the algorithm has curated a paedophile playlist that is only visible to paedophiles … Far too often, what we're seeing occur on social media—and this has been true on Twitter; it has been true on other platforms—is that the social media algorithm is creating the parallel social media universe for child abusers, but the social media response system is reactive: it requires users to detect and report inappropriate behaviour. Well, if you're part of a community of child abusers on YouTube or Twitter or TikTok or wherever, you're not going to self-report the problematic content. So they're moving into these algorithmically-created stratums of inappropriate child content that the rest of us actually can't see. They're in a sort of a parallel universe.128
3.101
Algorithms were said to encourage users to access material that matches their worldview, effectively encouraging narrow-mindedness which can lead to intolerance. Professor Third suggested that algorithms can perpetuate discrimination due to their promotion of views and opinions that match the user’s own rather than a more diverse range.129 She explained that providing variety to users is essential for societal wellbeing:
One of the big challenges I see is that it's very easy for children and young people to go online to be exposed to views that are of the same kind. It's really important not just for each individual to grow up and see themselves in digital media, or for them to learn about other cultures and so on, but for the health of our democracy that our children receive very diverse forms of information via a variety of channels and platforms. I know there's some very interesting work underway by some of the platforms to think about how we can use algorithms creatively to make sure we are serving up a diverse media diet. By increasing the diversity of not just children's but also adults' diets—you'll hear a regular theme here that some of the things we need to do for children also need to be done for adults—we are teaching them to understand people and giving them opportunities to be empathic about other people's experiences and so on.130
3.102
A similar point was also made by Ms Haugen, who used the example of a study into Instagram’s algorithms in relation to eating disorders to demonstrate how the technology can lead young people towards harmful behaviours:
You can go and follow moderate interests like healthy eating and healthy recipes on Instagram and just click on the content they provide you and you will get pushed towards eating disorder and self-harm content. Do it each day. Click on the first 10 things. Wait till the next day and do it again. You'll be shocked how fast the algorithm pushes you to more and more extreme ideas. If you were someone who was feeling depressed—you're a teenager, you're 16 years old, you're kind of struggling in school; maybe you're feeling kind of awkward and you start self-soothing by consuming Instagram. I think it's called doomscrolling, you're feeling stressed, so you just keep scrolling. If the content is what's making you depressed, that is dangerous, and saying that the solution to having an addictive product is having a tool that the addict has to pick to enable, that doesn't seem like a scalable solution.131

Addressing harm from algorithms

3.103
Some witnesses cautioned that algorithms are not inherently dangerous and can have positive functions if used appropriately. Home Affairs stated that many digital platforms already use algorithms to identify and stop child grooming conversations. It noted, however, that recent studies indicated that fewer than 40 per cent of companies surveyed utilised that form of technology to detect child exploitation. Further, Facebook’s Friend Finder function was said to have been ‘exploited by child sexual abuse live streaming facilitators to connect them with offenders’.132
3.104
In addressing concerns regarding algorithms, Meta has issued Content Distribution Guidelines which provide information on forms of content that do not violate the Community Standards but will not be distributed prominently because they are problematic or of low quality.133 Further, Meta stated that it has created several features to enhance transparency for users, such as the ‘Why Am I Seeing This?’ function, and to increase users’ control over algorithms.134
3.105
More broadly, Meta observed that critics of social media algorithms argue that such systems are designed to encourage provocative or sensationalist content. In addition to arguing that its systems do not reward such content (and often work to reduce it), Meta further stated:
The reality is, that it is not in Meta’s interest — financially or reputationally — to continually turn up the temperature and push users towards ever more extreme content. The company’s long-term growth will be best served if people continue to use its products for years to come. If our company prioritised keeping people online an extra 10 or 20 minutes, but in doing so made them less likely to return in the future, it would be self-defeating.
Additionally, the vast majority of Facebook’s revenue comes from advertising. Advertisers don’t want their brands and products displayed next to extreme or hateful content.135
3.106
Twitter disagreed with the idea that algorithms are inherently problematic, commenting:
Algorithm amplification is not problematic by default – all algorithms amplify. Algorithmic amplification is problematic if there is preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it.136
3.107
Notwithstanding this point, Twitter stated that it had made changes to its algorithms over time, enabling users to change the default setting so that they see only a reverse-chronological timeline of Tweets rather than the algorithm-generated ranking. It also indicated that further developments to its algorithms were being made, with the goal of providing users with more choice over how they wish to see content displayed.137
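The difference between the algorithm-generated ranking and a reverse-chronological timeline can be illustrated with a toy example. The ‘engagement’ figures and sorting keys below are assumptions for illustration only and do not reflect Twitter’s actual ranking model.

```python
# Toy comparison of reverse-chronological versus engagement-ranked ordering.
# The engagement figures are invented; this is not Twitter's actual model.

tweets = [
    {"text": "Measured policy analysis", "posted_minutes_ago": 5,  "engagement": 40},
    {"text": "Outrage-bait hot take",    "posted_minutes_ago": 90, "engagement": 900},
    {"text": "Friend's holiday photo",   "posted_minutes_ago": 30, "engagement": 12},
]

# Reverse chronology: newest first, regardless of how much reaction a post drew.
chronological = sorted(tweets, key=lambda t: t["posted_minutes_ago"])

# Engagement ranking: posts that provoke the most interaction rise to the top.
ranked = sorted(tweets, key=lambda t: t["engagement"], reverse=True)

print([t["text"] for t in chronological])
print([t["text"] for t in ranked])
```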

Algorithm regulation as a blunt tool

3.108
A number of witnesses were critical of the idea of imposing strict or blanket regulation on algorithms, arguing that algorithms had significant benefits. DIGI stated that algorithms were critical to the functioning of many online businesses and platforms, such as online mapping and word processing.138 Automatic flagging of harmful content can also be a function of algorithms, with recent reports highlighting that 95 per cent of videos containing harmful content on YouTube were detected by automated flagging.139 Further, detection algorithms are used by social media platforms in relation to user behaviour, such as Twitter’s detection of end-users abusing or harassing others, or Meta’s algorithms which can find indicators of users at risk of self-harm or suicide.140
3.109
Digital Rights Watch (DRW) cautioned that the regulation of algorithms, including automated content moderation, could cause harm to those who have their content or accounts mistakenly removed. DRW argued that most digital platforms use automated technology to moderate content, which may result in certain forms of innocuous content being flagged, such as content produced by minority groups.141
3.110
DRW noted that automated content moderation is effective for some forms of media, such as images or videos, but is yet to effectively moderate audio or text content. Harmful content that is a mixture of image, audio, video and text, they argued, may not be flagged by automated processes.142
3.111
Dr Michael Salter also defended the use of algorithms in certain circumstances, arguing that algorithms have the capacity to be used for positive purposes. He stated:
Algorithms and algorithmic detection have a critical role to play in child protection and that role could certainly be expanded. For example, there are automated detection systems for child sexual abuse material. Technology companies are not obliged to use them, although they are very effective. There are also algorithmic detection systems for inappropriate words, word combinations, emoji uses that can be used to detect child grooming—for instance, through message functions. Again, the technology companies at the moment are not obliged to use those sorts of detection systems. There is a lot that could be done at the back end and at the design end to make these products child safe. All that I would ask is that any service delivered to an Australian child, whether it's online or offline, does not come with the predicted risk of that child being raped or sexually abused.143
3.112
This point was also raised by the CRF, which suggested that it was within the power of social media companies to act in this way, as they had previously demonstrated the capacity to do so:
We saw how quickly social media platforms removed news media overnight. Do you remember? The question I ask is: Why can't they do the same in relation to online child exploitation? Why can't there be an absolute focus to remove that content from their sites. They have the AI technology. They have the ability. This is a continuing question that I ask of these various different providers.144

End-to-end encryption

3.113
End-to-end encryption (E2EE) is defined by the Office of the eSafety Commissioner (eSafety) as ‘a method of secure communication that allows only the people communicating with each other to read the messages, images or files being exchanged’.145 It is a widely used feature of apps such as WhatsApp, Signal and Skype, which offer messaging and VoIP telephony services.146 Some platforms, such as Meta, have indicated an intention to move towards full encryption of their services.147
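The defining property of E2EE, that only the communicating endpoints hold the keys needed to read a message, can be sketched with an off-the-shelf cryptography library. The example below uses the PyNaCl library’s public-key Box construction as a conceptual sketch only; it does not represent the specific protocols used by WhatsApp, Signal or any other service.

```python
# Conceptual sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Real messaging apps layer more elaborate protocols on top of primitives like
# these; the point here is only that a platform relaying ciphertext cannot read
# the message, because it holds neither party's private key.

from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"Meet at 6pm")

# A platform relaying the message sees only this ciphertext.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"Meet at 6pm"
```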
3.114
E2EE is well-recognised for its capacity to provide secure and private communications, but has significant risks associated with it. eSafety’s assessment of the risks includes issues such as:
Avoiding scrutiny by law enforcement agencies for illegal activities, including online child sexual abuse, as detection technology generally does not work on E2EE systems, which may facilitate the production and distribution of CSAM; and
Creating difficulty for law enforcement in detecting CSAM if social media companies begin to offer more E2EE services.148
3.115
Home Affairs expressed similar criticisms of the technology, arguing that E2EE is increasingly being utilised by criminals in order to avoid detection by law enforcement. Home Affairs argued that the ‘increasing normalisation of these technologies on digital platforms, including social media, is bringing Dark Web functionality to the mainstream’.149 It further outlined its concerns about the proliferation of E2EE functionality:
While strong encryption plays an important role in protecting user privacy and data, the use of this technology in some settings, particularly on platforms used by children, brings with it important public safety risks. The application of end-to-end encryption across social media messaging services – such as expansion beyond the current opt-in services proposed by Meta (including on platforms such as Messenger and Instagram Direct) – will provide predators with the ability to evade detection as they connect with multiple vulnerable children anywhere in the world and develop exploitative grooming relationships. The nature of end-to-end encryption means that not even Meta, as the hosting company, would be able to retrieve or view these messages in order to detect child abuse, even under a lawfully issued warrant. The anonymity afforded by end-to-end encryption not only enables predators to groom victims on a social media platform, it also allows these criminals to safely connect and share tactics on how to perpetrate child sexual abuse, share explicit images, arrange live streaming of child sexual abuse through facilitators in vulnerable countries and avoid law enforcement.150
3.116
Further, Home Affairs stated that, when it had raised concerns, social media companies such as Meta expressed ‘a degree of seeming indifference to public safety imperatives, including in relation to children’.151
3.117
Dr Michael Salter similarly stated that E2EE poses a significant and dangerous threat to efforts addressing the detection of CSAM:
At the moment the policy settings not just in Australia but internationally effectively disincentivise social media companies from becoming aware of illegal or harmful content, because once they are aware we start to see the expansion of their legal exposure and their potential liability. And so, as a result, it becomes in the interests of social media companies to make it difficult for them to know about illegal content. When we look at the move of, say, Facebook towards end-to-end encryption of the Messenger function, and in fact Twitter has also articulated an interest in encrypting its direct message service, then effectively what this does is it creates a black box in which the service cannot see any illegal content that's being exchanged and as a result they're not legally liable for that content. There is no incentive for them to proactively seek out and remove that material. That's not just true of social media, it's true of file-hosting services, it's true of a range of electronic service providers. The sheer amount of child sexual abuse material is increasing every year in the order of 50 per cent simply because this material is proliferating across services that have no legal obligation to proactively detect and remove that content.152
3.118
The CRF similarly took issue with the use of E2EE to enable offenders to avoid detection, stating that law enforcement would be significantly hindered if social media companies were to move towards E2EE services:
End-to-end encryption is skewed towards providing greater privacy to adults at the expense of safety for children. In 2019, Interpol joined a list of law enforcement agencies in arguing that criminals hide behind the technology and that technology companies should be doing more to grant law enforcement agencies access across these channels. Law enforcement would have an incredibly reduced capacity to identify online child sexual exploitation offences without platforms proactively reporting instances to the National Center for Missing & Exploited Children. With no technological exception to end-to-end encryption, the dehumanising abuse of children, who will be left as collateral damage, will continue undetected. Their abusers and the people who trade the images and videos of abuse will be protected.153
3.119
Yourtown raised a similar point, noting that E2EE technology is primarily concerned with providing privacy, which does not align with the need to detect CSAM, and arguing that this tension needed to be addressed before any rollout of E2EE by social media companies:
There is an argument that that affords privacy, but that needs to be balanced with minimising significant harms to children who may be subject to abuse and exploitation by end-to-end encryption processes. There needs to be always the overarching principle that the best interests of the child should be paramount in any design considerations. With some of those more secretive applications, there should be an opportunity for law enforcement, or indeed the providers, to be able to access information if it's in the best interests of the child and it's going to respond to significant safety concerns related to that child.154
3.120
In addition, Home Affairs noted that the technology industry had demonstrated that it was possible to detect CSAM on E2EE platforms, pointing to features rolled out by Apple in August 2021 utilising ‘hash’ technology, similar to that used by Google (see above). It stated, however, that privacy concerns had been raised in relation to this technology, and that community backlash may deter digital platforms from implementing such features.155
3.121
The polarisation of the debate regarding E2EE was raised by Dr Salter, who expressed concern that many organisations and groups had the view that ‘all online data should be a black box that cannot be penetrated by law enforcement or by government’.156 Home Affairs also highlighted this issue, arguing that, while the debate regarding E2EE had become increasingly polarised, it was not a question of an ‘all or nothing’ approach. It argued that the digital industry was capable of developing technology and software which could scan for CSAM and other types of harmful content, and pointed to the example of Apple’s new NeuralHash technology, which detects known CSAM on iOS devices before images are uploaded to iCloud Photos.157
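The ‘hash’ matching approach referred to above can be sketched in simplified form. Production systems such as PhotoDNA or NeuralHash use perceptual hashes designed to survive resizing and re-encoding; the sketch below instead uses exact SHA-256 matching against a hypothetical list of known hashes, purely to show the workflow of checking uploads against a database of previously verified material.

```python
# Simplified sketch of hash-matching detection of known harmful files.
# Real systems use perceptual hashing; exact SHA-256 matching is shown here
# only to illustrate the workflow. The "known" hash below is sha256(b"test"),
# chosen so the demonstration produces a match.

import hashlib

known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def should_flag(data: bytes) -> bool:
    """Flag an upload for review if its hash matches a known entry."""
    return file_hash(data) in known_hashes

print(should_flag(b"test"))           # True: matches the example hash
print(should_flag(b"holiday photo"))  # False: no match
```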
3.122
In February 2020, the United Kingdom-based National Society for the Prevention of Cruelty to Children (NSPCC), in conjunction with over 300 child protection experts internationally, including Dr Salter, wrote an open letter to Meta CEO Mark Zuckerberg outlining concerns with certain proposed Facebook and Instagram features. In particular, the NSPCC cautioned against the use of encrypted messaging and its integration into large open platforms, which it stated would increase risks to children due to heightened exposure to abusers.158
3.123
The NSPCC called for Meta to adopt five key measures:
Investment in safety measures showing that children’s safety will be protected in the event that E2EE is introduced, including a capacity for Meta to scan for child abuse images and intervene where abuse is detected in its products or services;
A commitment to instigate a voluntary duty of care in relation to the protection of children when designing encryption functions;
A consultation process with child protection experts, governments and law enforcement agencies;
Sharing data with governments and child protection experts to demonstrate that risks have been mitigated and how abuse behaviours have been affected by the introduction of such technology; and
Delaying the rollout of E2EE on Meta platforms until all mitigation strategies have been tested and can adequately address the concerns of child protection experts.159
3.124
Further, Home Affairs advised the Committee that when concerns had been raised with the platforms regarding the risks posed by E2EE, the companies had demonstrated that their priority was privacy rather than harm mitigation:
The Department has ongoing concerns that digital platforms are prioritising privacy to the detriment of public safety. … For example, end-to-end encryption provides limited advantages over and above network level encryption. In the case of Facebook Messenger for example, end-to-end encryption will only apply to the content of messages, which has less commercial value to the company. The Department understands that personal data, such as metadata and site and cookie tracking, could still be exploited by Meta for commercial purposes, in line with their business model.160
3.125
Conversely, some witnesses were strongly in favour of E2EE being implemented more broadly. DRW argued that E2EE is critically important for online safety. While acknowledging the concerns regarding CSAM, DRW highlighted that encryption ‘provides everyone with digital security, and protects everyone from arbitrary surveillance by malicious actors and cybercrime (e.g. identity theft)’.161 Further, encryption was framed by DRW as a means of providing protection to vulnerable online actors, such as survivors of family violence who utilise encryption to protect information about escape and safe relocation.162

Privacy, anonymity and online harm

3.126
A key concern for users of social media is how their data is treated and protected by digital companies. A number of topics were presented to the Committee touching on privacy, anonymity and how these issues link with online harm. This section examines these issues.

Anonymity and pseudonymity of online abusers

3.127
Anonymity and pseudonymity are used by many people when engaging in online activity in order to hide their true identity. Methods of hiding one’s identity include:
Providing false or no information to other individuals online about their personal characteristics such as name, age, or gender;
Using an anonymised or pseudonymous account with minimal information provided to digital services in order to obtain access; and
Mimicking or stealing another person’s information (including their image).
3.128
Witnesses were sharply divided in their views of online anonymity, depending on the issues in which they contextualised their responses.

Arguments for anonymity/pseudonymity

3.129
For many people, anonymity online is a necessary boundary between their online and offline lives. It allows people, particularly vulnerable people, to engage with others online without worrying about the offline impacts of their online presence, and in many cases it encourages and enhances online participation.
3.130
The NMHC noted that it was not aware of any research conducted in relation to anonymity online and its impact on social connectivity.163 Nonetheless, it observed that anonymity could be used positively or negatively, like social media and online platforms in general. The NMHC further argued that one of the main attractions for clients of services such as Lifeline or Kids Helpline was their assurance of anonymity, which operates as a ‘critical entry point’ into mental health care services.164
3.131
WESNET emphasised the importance of online anonymity to women in dangerous situations, noting that ‘Many people who do not use their real names on social media may have legitimate, non-nefarious reasons, such as people fleeing domestic violence’. WESNET highlighted an article by Dr Belinda Barnet, senior lecturer in media and communications at Swinburne University, who warned that any attempt to remove or reduce online anonymity could ultimately reduce safety: ‘anonymity is important to your physical safety. Attacking anonymity on social media won’t stop trolling, but it’ll put sections of our community in danger’.165
3.132
DRW argued that allowing internet users to hide their identity is a vital aspect of digital interaction, which promotes freedom of speech and personal autonomy and empowerment:
On a societal level, anonymity and pseudonymity online play an essential role in the functionality of the free and open internet, and enable political speech online which is integral to a robust democracy. On an individual level, the ability to be anonymous or use a pseudonym allows people to exercise control and autonomy over their online identity, to uphold their privacy. Anonymity is often an essential tool to protect individual safety and wellbeing. Any attempt to reduce the ability for people to be anonymous or pseudonymous online would undermine the above factors, and likely lead to increased long-term harm.166
3.133
DRW outlined a number of other reasons why people would use anonymity online, including:
Building communities online, especially in relation to marginalised groups such as the LGBTQIA+ community, people with disabilities, and ethnic minorities;
Seeking information in relation to stigmatised health conditions;
Victim/survivors seeking help in relation to domestic and family violence; and
People in a public-facing role (such as medical staff, social/youth workers, lawyers, teachers, and so on) who wish to have an online life without being tracked down or contacted in relation to their work.167
3.134
Ms Carly Findlay AM noted that anonymity is necessary for many people, while the use of real names does not lessen the likelihood of users abusing or harassing others:
Not all anonymous people are terrible. A lot of them have to use a pseudonym because they need to be protected legally, or there might be another reason for that. There's that. I definitely found that people who put their name to it are kinder. Also I've seen that people who put their name to things are really hateful.168
3.135
The Australian Information and Privacy Commissioner, Ms Angelene Falk, emphasised that anonymity is a key online principle:
If you're a person who's experienced domestic violence and are wishing to access support online being able to do so without using your real name will be a way of keeping your identity and location private, in turn ensuring your safety. So it's an important privacy feature. It can also be a safety feature.
In terms of a difference of views as to how anonymity ought to be played out in the online environment, we know, as I said, that anonymity and pseudonymity can actually ensure safety in certain contexts and that that privacy right can enable other rights and freedoms to be exercised—like freedom of speech, freedom of association and so on. One of the things I know the community is concerned about is the proliferation of abusive content online and the ability of people to engage in an online environment using a pseudonym. Again, we are talking about some very complex social policy issues and how to strike that right balance. From a privacy perspective, it's an important privacy right and it ought only to be displaced where there's a real evidence base that it's necessary, reasonable and proportionate to achieve some other policy objective.169
3.136
The right to anonymity online is included in the Australian Privacy Principles (APP): ‘Individuals must have the option of not identifying themselves, or of using a pseudonym, when dealing with an APP entity in relation to a particular matter’.170
3.137
Similarly, the Attorney-General’s Department highlighted that there are many reasons for anonymity online, and that the Australian Government considers it important to protect anonymity where it does not lead to harm:
… there's a fine line between disincentivising defamatory comments online and not seeking to have a chilling effect on legitimate and appropriate comments online, including legitimate comments made anonymously. It's not the intention of government to stop anonymous use of the internet or anonymous comments being made on the internet. There are many, many legitimate reasons why a user may want to be anonymous when making comments, and it is the government's position that that is appropriate, provided that the content and substance of those comments, in this context, don't become defamatory and, in other contexts, don't become the other sorts of online harms that the government would draw the line against.171

Arguments against anonymity/pseudonymity

3.138
In contrast to these views, some submitters expressed concern that anonymity or pseudonymity encourages or amplifies harm. The majority of witnesses who were against the practice of online anonymity or pseudonymity considered the matter primarily from the perspective of preventing CSAM and child exploitation material more generally. Ms Sonya Ryan, CEO and Founder of the CRF, stated:
Essentially, it provides a veil for criminals to hide behind. This provides an environment to be able to exploit children in various different ways. When it comes to our young people, I think that the government should be doing everything in its power to lift that veil, particularly for law enforcement agencies. Of course that would need to be regulated, but, when it comes to the safety of a young person, all measures should be made available and less anonymity given for users in the online space that are accessing children. We are seeing, especially with the proposed end-to-end encryption, more focus on privacy than there is often on the protection and safety of young people.172
3.139
Following on from this, Ms Ryan was also supportive of the implementation of age verification to monitor children’s access to online services, as she believed that this would also enable identity verification to assist with the ‘unmasking’ of anonymous perpetrators.173
3.140
More broadly, other witnesses argued that online anonymity encourages abuse or harassment. For instance, Ms Erin Molan noted that the power online trolls have comes from their anonymity:
But the personal impact of this on people—and we've seen people take their lives. We've seen kids try to take their lives. We've seen so many lives ruined by this kind of behaviour. It's not weak and it's not for the vulnerable. It's not for the people who aren't resilient. Strong people get absolutely annihilated and torn to shreds by this behaviour, by anonymous trolls. You take away their anonymity and you take away their power and, all of a sudden, it's a level playing field again. And that's what it needs to be.174
3.141
While acknowledging that online anonymity plays important roles, particularly for whistleblowers, Dolly’s Dream contrasted that with the problems it can lead to, and the need to respond to those in a nuanced way:
But when it's used purely for the purpose of trolling somebody, bullying somebody, abusing somebody—all of the things that we know are the worst of social media—there are some of the issues on which we hope platforms, service providers and regulators are able to at least have a meaningful conversation and work with others.175

Privacy protection and data storage practices

3.142
Privacy considerations, including how users’ data is stored and used by social media and other digital companies, were highlighted as an issue by witnesses. It was put to the Committee that digital platforms do not provide sufficient transparency in advising how they treat their users’ data and protect their privacy, particularly for vulnerable users such as children.176
3.143
Professor Amanda Third asserted that children are conscious of the digital industry’s possession of their data, but are encouraged by adults’ behaviour to agree to lengthy terms and conditions without fully understanding them. Professor Third stated:
Children, like adults, have become attuned to the ways their data is collected, stored, used and shared on. For children here in Australia but also internationally, the big concern is that they really don't understand how their data is collected. They know it's happening, but they don't know what's happening to it and when and how that data can be used. They feel like they have to sign very complicated terms and conditions when they sign up to use social media and other platforms, and they don't always understand the deals they're making. This sends a very destructive message because, on the one hand, we teach children through a whole range of programs—for example, around healthy relationships—that consent is hugely important to the proper functioning of the world, but, on the other hand, when it comes to their technology practices we more or less tell them that it doesn't matter whether or not you understand the terms and conditions you're signing up to; check the box and off you go.177
3.144
Meta argued that it had developed strong privacy and data management practices, consulting with experts, government and the broader sector to create effective privacy tools.178 It stated that its Privacy Review process enables Meta to build every new product or feature with privacy as a paramount consideration, and provides customers with ‘choices and transparency’.179 Meta explained that, when a Privacy Review is conducted, a broad range of teams examine a product to determine the strength of its privacy protections:
During this review, cross-functional teams evaluate privacy risks associated with a project, and determine if there are any changes that need to happen before launch to control for those risks. This review considers whether a project meets our privacy expectations which include: purpose limitation, data minimisation, data retention, external data misuse, transparency and control, data access and management, fairness and accountability.180
3.145
In relation to data privacy in particular, Meta provided the following summary of its approaches:
We take a multi-faceted approach to data security, focussing on areas as diverse as penetration testing, spam prevention, disrupting operations run by adversaries, data protection, and taking legal steps to respond to cyber attacks. We’ve invested significantly to ensure our network infrastructure is strong, secure and capable of enforcing strong encryption for billions of users. We use a combination of expert teams and automated technology to detect potential abuses of our services.181
3.146
Further, Meta stated that it follows five principles in relation to data security matters: using encryption and security to protect user data; refusing to provide ‘back door’ government access; ensuring robust policies in relation to government requests for user data; refusing compliance where Meta feels a government request is ‘deficient’; and providing transparency by way of providing notifications to users in relation to requests for the information prior to any disclosure.182
3.147
Google has developed a privacy protection program across its platforms, particularly in relation to children. For users under the age of 18 years, Google has implemented default privacy settings such as:
Disabled functionality for ads personalisation;
Upload settings set to the most private available, which restricts content to be seen only by the user and whoever they choose; and
Educational material and guidance for parents and children.183
3.148
Snap stated that it has implemented Privacy by Design principles in the design of its app, demonstrated by its lack of shared newsfeeds and its promotion of communication between two people or small groups only.184
3.149
Twitter stated that it has a Privacy Centre which provides users with more information about its privacy practices, including Twitter’s Privacy Policy and guidance for users on improving their privacy settings.185

Age verification technology

3.150
Age verification (or age assurance) tools have been increasingly utilised by technology and social media platforms to prevent children accessing inappropriate material. Meta emphasised that age verification ‘is not as easy as it sounds’, noting that adequately understanding a user’s age is a complex challenge for the entire digital industry.186
3.151
Google Australia explained that it uses a variety of means to determine the age of users before the platform restricts age-inappropriate content. It stated that ‘[t]hese measures can be supplemented with additional steps that ensure that children interacting with services are being treated appropriately while also respecting data minimisation requirements’.187
3.152
Meta took a slightly divergent approach, indicating a recognition that some users were too young to be on its service:
Meta takes a multi-layered approach to understanding someone’s age - we want to keep people who are too young off of Facebook and Instagram, and make sure that those who are old enough receive the appropriate experience for their age.188
3.153
As an initial step, Meta requires the provision of a date of birth when registering for a new account (known as an age screen). It will refuse access to those under the age of 13 years, and restricts repeated attempts to enter different birthdates into the age screen so that underage users cannot ‘game’ the system.189 Recognising that some users may lie about their age to gain access, Meta has invested in AI to understand a user’s real age. Signals used by Meta to detect a person’s true age include posts wishing someone a happy birthday and the ages written in comments, as well as linked accounts with different ages associated with them.190
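A minimal sketch of the age-screen step described above is set out below: it computes an age from a supplied date of birth, refuses registration below the minimum age, and limits repeated attempts to enter different birthdates. The retry limit and other implementation details are assumptions for illustration and do not reflect Meta’s actual systems.

```python
# Minimal sketch of an age screen. The retry limit and return values are
# illustrative assumptions, not Meta's actual implementation.

from datetime import date

MINIMUM_AGE = 13
MAX_ATTEMPTS = 3  # hypothetical cap on re-entering different birthdates

def age_on(today: date, birthdate: date) -> int:
    years = today.year - birthdate.year
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return years if had_birthday else years - 1

def age_screen(attempted_birthdates: list, today: date) -> str:
    if len(attempted_birthdates) > MAX_ATTEMPTS:
        return "blocked: too many attempts"
    latest = attempted_birthdates[-1]
    if age_on(today, latest) < MINIMUM_AGE:
        return "refused: under minimum age"
    return "allowed"

print(age_screen([date(2012, 6, 1)], today=date(2022, 1, 1)))  # refused
print(age_screen([date(2000, 6, 1)], today=date(2022, 1, 1)))  # allowed
```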
3.154
If Meta identifies accounts that appear to be owned by a person under 13 years, and that person cannot provide evidence of their age, the account is deleted. Meta stated that this policy led to the removal of over 2.6 million accounts on Facebook and 850,000 accounts on Instagram between July and September 2021 due to minimum age requirements not being met.191
3.155
Additional controls for young people on their platforms include:
Encouragement of private accounts for young people with existing public accounts on Instagram;
Controls on advertisements targeting young people under 18 years, including restricting advertisements based on interests, activities on the platform, or activity on other apps or websites;192
Default account provisions for young people, including privacy settings set at high levels;
Warnings for sensitive content that is permitted on the platforms for its ‘public interest, newsworthiness or free expression value, that may be disturbing or sensitive for some users’193 (e.g. violent or graphic content that provides evidence of human rights violations);
Restrictions on adults sending private messages to young people, including a safety notice function being sent to users;
Developing technology to make it difficult for adults to find or follow young people, specifically by identifying accounts demonstrating suspicious behaviour (e.g. being reported or blocked by a young person) and not displaying young people’s accounts to those users.194

Views of age verification

3.156
Many witnesses to the inquiry expressed concern that age verification technology would negatively impact on users’ privacy. DRW argued that the impact on privacy was at odds with reducing harm online. It stated:
Most forms of age verification require the provision of additional personal information in order to be effective. Incentivising companies and government agencies to collect, use and store additional personal information in order to conduct age verification creates additional privacy and security risk, which in-turn can exacerbate online harms.195
3.157
In outlining the ways in which Google conducts age assurance processes, Google Australia expressed its view that ‘hard identifiers’ or third-party verification methods should only be used for ‘content and services that are particularly risky for children as they have a detrimental impact on all users’ ability to access content and services’.196 Google Australia also suggested that stricter forms of verification may impact vulnerable groups who are unable to access the required forms of identification, such as credit cards.197 It expressed the view that:
No age verification mechanism is 100% accurate, and the more accurate the mechanisms the more intrusive it likely is. Ensuring that we implement age-appropriate safeguards, while at the same time ensuring that our services remain private and accessible remains a complex challenge. It’s a problem that we are committed to solving, but no one company will be able to address this alone. Age assurance models should follow a risk-based assessment and be implemented in a proportionate way, balancing the need for accuracy with the risk of limiting rightful access to information and impact on users’ privacy.198
3.158
The Daniel Morcombe Foundation noted that while many social media companies have put in place age restrictions for their users, these limits were easily overcome by parents or older siblings signing children up for accounts themselves.199
3.159
This topic was considered in an inquiry by the House Standing Committee on Social Policy and Legal Affairs, which recommended the development and implementation of a mandatory age verification regime for online pornographic material, to be undertaken by the eSafety Commissioner.200 It further recommended that the National Consumer Protection Framework for Online Wagering introduce a requirement that ‘customers are not able to use an online wagering service prior to verification of their age as 18 years or over’.201 The Australian Government was supportive of a number of the recommendations made, and work has commenced to address the concerns raised.202

Parental control features

3.160
Many online products such as social media platforms utilise parental control technology as part of their services to enable parents to monitor and control what content their children view and interact with.
3.161
Multiple witnesses suggested these controls were problematic in their approach to safety and were therefore of questionable effectiveness in protecting children online. Some witnesses pointed out that these systems assume that a parent has the technological understanding to use the controls effectively.
3.162
Family Zone highlighted that the current approach tends to shift responsibility of children’s safety online onto parents, and encourages blame to be placed on parents in the event that children are harmed:
Too frequently the exposure of children to harm online is blamed on parents. There appears to be a popular but entirely fallacious view that “parents don’t care” or “parents need to do more”.
In our experience this is categorically not true and anyone who has attempted to navigate the pitfalls, complexity and challenges of keeping kids safe online would agree.203
3.163
Professor Amanda Third suggested that the underlying assumption of parental control technology is that children have parents who can assist them in using technology, which may not be the case. She stated that children who do not have parents who are able and willing to teach them about technology use and monitor their activities may instead seek guidance elsewhere.204
3.164
Similarly, Reset Australia noted that:
While parental control tools are important, we do not believe that placing responsibility onto parents to manage online safety is the most effective solution. Platforms should be developing systems that are safe for children in the first instance.205
3.165
Reset Australia drew the analogy with industrial hazard reduction, where the focus is on eliminating hazards rather than simply placing barriers between users and the hazard.206
3.166
This point was supported by Dr Michael Salter, who suggested that allowing platforms to place the responsibility on parents to control their children’s online access and habits was problematic in two key ways.
3.167
Firstly, Dr Salter stated that not all children have protective parents. He explained that this could be for a range of reasons, including that parents ‘might be working three jobs, their parents might be experiencing substance abuse issues or mental health issues or … incapacitated for another reason’.207 He also noted that some children may not be cared for by their parents, such as those in residential care or out-of-home care.208 Secondly, Dr Salter noted that a significant proportion of CSAM is made within the home by parents and family figures, which indicates that these carers may not have their child’s best interests at heart.209
3.168
The Isolated Children’s Parents’ Association of Australia further highlighted the particular challenges faced by children from remote parts of the country, many of whom are educated in boarding schools and therefore do not have parents in proximity to help with and monitor their technology usage.210 Similarly to Dr Salter and Professor Third, they highlighted that many children do not necessarily have their parents present to appropriately guide them in using online services.
3.169
Responding to these concerns, Meta stated that parental controls are available for some of its services. Messenger Kids, launched in 2020, enables parents to set privacy and security controls while allowing young people to use the product.211 Meta intends to introduce parental controls for Instagram, including the capacity for parents to monitor and set limits on their children’s usage, and for young people to notify their parents if they report someone.212

Limited reporting requirements

3.170
Multiple witnesses voiced concerns regarding the lack of transparency and mandatory reporting requirements from social media companies in relation to online harms that exist on their platforms.
3.171
Dr Michael Salter stated that, due to the business model under which social media companies operate, digital platforms are generally disincentivised from providing transparency in relation to online harm, as doing so may scare off potential investors or advertisers.213 While he cited Meta as a positive example of a company which openly reports on the detection of, and action taken against, harms such as CSAM, he noted that other digital companies provide no such transparency:
In comparison [to Meta], we see almost no notifications from Apple year on year. But there is no question that significant amounts of CSAM are being created and shared via iPhones and also through various file storage and Cloud storage facilities.214
3.172
Ms Ryan stated that the CRF had sought data from social media services, but had been consistently refused on the basis of user privacy.215 She argued that any platform hosting children should be transparent with data in order to better protect underage users, and that all online entities should provide data to the ‘appropriate agencies’ if it assists in protecting children online.216
3.173
In response, Google Australia stated that since 2010 it has published transparency reports to provide the public with information on government requests for data. It has expanded these reports by including YouTube data, information on how Google is addressing CSAM, and a Community Guidelines Enforcement Report which details the platform’s efforts to reduce harmful content.217
3.174
In addition to providing transparency about its operations, Meta stated that it welcomed regulation of harmful content.218 Meta argued that it had ‘led the industry in developing transparency reports about content enforcement’, especially its ‘prevalence’ metric, which estimates the number of views of content that violates Meta’s policies as a proportion of the estimated total number of content views on the particular platform (e.g. Facebook or Instagram).219
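As a worked illustration of how such a prevalence figure is calculated (the numbers below are invented and are not Meta’s reported figures):

```python
# Illustrative prevalence calculation with invented numbers.
violating_views = 5_000     # sampled views found to violate policy
total_views = 10_000_000    # estimated total content views in the period
prevalence = violating_views / total_views
print(f"{prevalence:.2%}")  # 0.05% of views were of violating content
```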
3.175
Meta also publishes Community Standards Enforcement Reports every quarter, which provide information on the amount of content Meta removes or otherwise actions, and the amount of content identified and removed before being viewed.220 Other transparency reports published by Meta include reports on widely viewed content, content restrictions, government and law enforcement cooperation, and other areas.221 Meta’s Oversight Board also publishes Transparency Reports which provide information about its deliberations and decisions in relation to cases.222
3.176
Similarly, Twitter publishes biannual Transparency Reports detailing information such as enforcement of its Rules and disruption of foreign interference. Twitter also provides open access to its Transparency Centre, which provides data on issues such as information requests, removal requests, copyright notices, and others.223
3.177
TikTok issues Transparency Reports on a quarterly basis in relation to harmful material and removal of content.224 Snap publishes transparency reports on a biannual basis, and is currently the only major social media platform to break down statistics by country.225

Committee comment

3.178
In an ideal online world, technology would act as a neutral means of connecting people around the world, enabling safe and secure communication. Given the findings contained in this chapter, however, it is clear that some elements of digital platforms have the capacity to amplify harm and cause further distress to victims.
3.179
The Committee notes the work that has been conducted by the social media companies particularly in attempting to promote safety for users on their platforms. It also commends the continuing efforts of technology companies to provide safety for users, particularly vulnerable users such as children.
3.180
Nonetheless, the Committee is of the view that there is much more for industry to do to assure governments and the public that platforms are taking the matter seriously and that Australians can trust them to protect their safety online.

System design

3.181
The Committee agrees with the views of witnesses that social media platforms and digital products at large have generally not been designed with users’ safety as a priority, particularly for vulnerable groups.
3.182
The Committee is particularly mindful of Dr Michael Salter’s comments on this matter, in which he compared the online industry to the childcare sector in the 1970s and 1980s. That sector developed rapidly due to increased demand, but did not adequately consider the risks to children and as a result experienced significant issues in relation to child sexual abuse.226 The online industry is at a similar stage of development: safety principles are now accepted as necessary to ensure the safety of what has become an essential service for many Australians, but are yet to be substantially implemented by providers.
3.183
The Committee commends the work of the eSafety Commissioner in establishing the Safety by Design Principles, and looks forward to seeing the implementation of these principles as the Online Safety Act comes into effect. The Committee is pleased to note that the social media and digital platforms appear to have been cooperative and constructive during consultations for the design of the Safety by Design Principles, and hopes to see their commitment to online safety turning into effective action on their platforms.
3.184
Having said that, while the Committee is cognisant of the work being conducted by social media companies to act on harm in line with the Safety by Design Principles, it notes that a number of social media companies and digital platforms have been operating for more than ten years, and that the Safety by Design Principles will therefore need to be applied retrospectively. The Committee is eager to see how such retrospective application will be carried out, and believes that this topic would be best examined by the upcoming review of the operation of the Online Safety Act.

Recommendation 9

3.185
The Committee recommends that future reviews of the operation of the Online Safety Act 2021 take into consideration the implementation of the Safety by Design Principles on major digital platforms, including social media services and long-standing platforms which require retrospective application of the Safety by Design Principles.

Identifying and removing harmful content

3.186
The Committee is concerned by the evidence provided regarding the harms present on digital platforms, as outlined in Chapter 2. These forms of harm are pernicious to those who experience them, causing real and serious suffering that is often long-lasting and, in many cases, affects every aspect of victims’ lives.
3.187
Social media and other digital companies appear to be addressing these harms through a mixture of proactive detection technology and user reports made once content is published on the platform. While the Committee notes that social media companies are improving at detecting harmful content before it is made public, it remains concerned that these detection models are inherently reliant on responding to harmful content after it has been published or has occurred, and do not go far enough in preventing harm.
3.188
Another key issue raised by witnesses was the inadequacy of the industry’s policing and moderation of its own standards. Many examples were given of abusive comments that met the threshold to involve either the police or the eSafety Commissioner, yet were not taken down by the platforms themselves.
3.189
Platforms need to properly monitor and uphold their own standards. There should be clear and direct consequences for breaches, including (but not limited to):
Banning users from social media platforms and other digital spaces; and
The use of pop-up warnings for content that an algorithm identifies as potentially breaching terms of service.
3.190
Further, the Committee objects to the trend among many social media platforms of placing too much responsibility for identifying harmful content on user reports. Everyday users are not technically or psychologically equipped to identify harm at scale or to withstand the trauma that can result. Social media users should be able to trust that their interactions online will be safe and free from harmful content that could have extreme and long-term effects on viewers.
3.191
The detection of CSAM is of particular interest to the Committee. The inconsistent approaches of social media and other digital platforms to detecting and removing CSAM indicate that further work is required to ensure that the industry does not inadvertently protect and facilitate perpetrators. Accordingly, it would be appropriate for digital platforms to investigate improved methods of CSAM detection, in consultation with both Australian and international child exploitation authorities.
3.192
The Committee notes the comments by a number of witnesses in relation to the dangers posed by a mass uptake of E2EE by social media and digital companies. It is not clear that the risks posed by the potential shielding of CSAM (and other forms of harm) on E2EE services are outweighed by the privacy benefits those services offer users. While such services can be valuable to vulnerable groups, such as those experiencing family violence, the Committee considers that one vulnerable group’s rights should not negate another group’s right to protection from harm. Any potential widespread uptake of E2EE should be carefully considered and, if necessary, regulated to ensure that the appropriate balance between harm reduction and privacy protection is maintained.
3.193
The Committee’s view is that the challenges raised by the implementation of E2EE by social media platforms are real and of concern. Without corresponding safeguards, E2EE can undermine the ability of law enforcement agencies to identify online predators.
3.194
Additionally, it became increasingly clear during the course of the inquiry that, while social media companies may have strong policies and terms of use in relation to harmful content on their services, they experience significant challenges in upholding these standards. When provided with examples of abusive comments that met the threshold for eSafety’s adult online abuse framework, and also for police investigation, a number of social media platforms did not remove the comments or consider them to have breached the relevant terms of use.
3.195
The community standards that platforms apply to their users do not necessarily match general community expectations. Social media companies have a role in shaping societal standards regarding what is and is not appropriate behaviour, online and in person. The Committee is concerned that community standards are being shaped and influenced by social media platforms and that, without regulatory intervention, online abuse will continue to flourish.
3.196
Accordingly, the Committee believes that the eSafety Commissioner should be given additional powers to compel social media companies to provide evidence regarding the enforcement of their terms of service and rules. It is further of the view that the eSafety Commissioner should be given greater powers to disrupt volumetric attacks against individual users.

Recommendation 10

3.197
The Committee recommends that the Department of Infrastructure, Transport, Regional Development and Communications, in conjunction with the eSafety Commissioner and the Department of Home Affairs, examine the need for potential regulation of end-to-end encryption technology in the context of harm prevention.

Recommendation 11

3.198
The Committee recommends that the eSafety Commissioner, as part of the drafting of new industry codes and implementation of the Basic Online Safety Expectations:
Examine the extent to which social media services adequately enforce their terms of service and community standards policies, including the efficacy and adequacy of actions against users who breach terms of service or community standards policies;
Examine the potential of implementing a requirement for social media services to effectively enforce their terms of service and community standards policies (including clear penalties or repercussions for breaches) as part of legislative frameworks governing social media platforms, with penalties for non-compliance; and
Examine whether volumetric attacks may be mitigated by requiring social media platforms to maintain policies that prevent this type of abuse and to report to the eSafety Commissioner on the operation of those policies.

Recommendation 12

3.199
The Committee recommends that the eSafety Commissioner examine the extent to which social media companies actively apply different standards to victims of abuse depending on whether the victim is a public figure or requires a social media presence in the course of their employment, and provide options for a regulatory solution, which could include additions to the Basic Online Safety Expectations.

Algorithms

3.200
While algorithms play a key role in the basic functioning of many types of online services, it is clear that they have the potential to greatly amplify online harm. This concern is heightened by evidence from witnesses suggesting that many social media platforms are opaque about how their algorithms work.
3.201
Algorithms require further investigation to determine the types and scale of harm they can cause, how they operate in different digital mediums, and how best to moderate and regulate them.
3.202
Notwithstanding this point, the evidence suggested that many social media companies do not make publicly available details of how their algorithms work, or of whether the platforms are doing anything to address the potential harms their algorithms cause. While some of this material may be commercial-in-confidence, greater transparency is required of social media companies to demonstrate that these concerns are being addressed.
3.203
A statutory requirement for digital platforms and social media companies to provide details of how they are working to minimise harm caused by algorithms would be an appropriate mechanism, and could be designed so as not to compromise commercial-in-confidence material. The review of the Online Safety Act should consider whether such a requirement could be implemented and the appropriate form for it to take.

Recommendation 13

3.204
The Committee recommends that the eSafety Commissioner, in conjunction with the Department of Infrastructure, Transport, Regional Development and Communications and the Department of Home Affairs and other technical experts as necessary, conduct a review of the use of algorithms in digital platforms, examining:
How algorithms operate on a variety of digital platforms and services;
The types of harm and scale of harm that can be caused as a result of algorithm use;
The transparency levels of platforms’ content algorithms;
The form that regulation should take, if any; and
A roadmap for Australian Government entities to build skills, expertise and methods for the next generation of technological regulation in order to develop a blueprint for the regulation of Artificial Intelligence and algorithms in relation to user and online safety, including an assessment of current capacities and resources.

Recommendation 14

3.205
The Committee recommends that the eSafety Commissioner require social media and other digital platforms to report on the use of algorithms, detailing evidence of harm reduction tools and techniques to address online harm caused by algorithms. This could be achieved through the mechanisms provided by the Basic Online Safety Expectations framework and Safety by Design assessment tools, with the report being provided to the Australian Government to assist with further public policy formulation.

Algorithmic transparency

3.206
The Committee was told repeatedly by the platforms during the course of this inquiry that their practices for dealing with online harms have improved.
3.207
However, these claims were difficult to assess in the absence of consistent, specific and auditable transparency frameworks. The Committee is of the view that this makes measuring such improvements over time very difficult.
3.208
As Ms Frances Haugen noted:
[The] inability to see into Facebook's actual systems and confirm how they work, as communicated, is like the Department of Transportation regulating cars by only watching them drive down the highway.227
3.209
Mr Brandon Silverman, former Chief Executive Officer of CrowdTangle and a former employee of Meta, outlined proposals to empower the United States Federal Trade Commission to enact tiers of transparency on social media platforms. The proposed legislation would utilise three transparency mechanisms:
The first is a mechanism for allowing in-depth research on very sensitive datasets that would be facilitated by an agency that sits under the National Science Foundation. The idea would be that academics and researchers would propose doing research on certain types of datasets. The NSF would function as a mediator…
The second piece is what's called a safe harbour, which would allow public interest and news gathering related use cases for automated collection of data off platforms. That is colloquially oftentimes referred to as scraping, but it would certify the rights of certain entities to do automatic collection off public datasets…
The third one is…essentially a handful of different other mechanisms [such as] a set of libraries designed to provide access to datasets that the public, writ large, could look at.228
3.210
Ms Frances Haugen made reference to the ‘floor for transparency’ and said that:
Any system that we put in place needs to be dynamic. It needs to be a thing where the threats that we are aware of with Facebook will be different six months or a year from now. ... We need a system that is not about: 'Here are the 10 … things each day or each week or each year.' We need systems that are dynamic and respond to emergent concerns.229
3.211
Developing an effective transparency framework for the social media platforms is a complex challenge for policy makers and regulators, but an important one to address. In many senses, it is the policy intervention upon which the success of many other interventions rests.
3.212
The Committee therefore considers that, despite its complexity, requiring social media platforms to be transparent is a task that policy makers and regulators must pursue.

Recommendation 15

3.213
The Committee recommends that, subject to Recommendation 19, the proposed Digital Safety Review make recommendations to the Australian Government on potential proposals for mandating platform transparency.

Protection of children

3.214
The Committee acknowledges the work of the social media platforms and digital services to protect children while on their sites. Having said that, it is clear from the evidence that children and their safety have not been a primary focus in the design of products or in the management of users’ experiences. Regardless of the intentions of social media platforms in attempting to keep underage users off their services, the reality is that children will inevitably access these services, sometimes to their detriment. Social media platforms have a fundamental moral duty to ensure that children are kept safe on their platforms, regardless of age restrictions.
3.215
Children have a right to privacy in their early years. This right, however, is not being strictly protected by social media platforms at present. Strong protections for children should be considered paramount, and privacy settings on social media accounts should be set as high as possible for those under 18 years of age.
3.216
Further, the broader technology industry has a role to play in ensuring children remain safe online. Additional obligations on technology companies, particularly those that design and produce devices such as smartphones, are needed to ensure that parents are able to effectively monitor and control their children’s use of social media services and intervene before harmful situations arise.
3.217
The Basic Online Safety Expectations Determination includes an expectation that industry ensure the default privacy and safety settings of services targeted at, or used by, children are robust and set to the most restrictive level. Notwithstanding this, the Committee considers there is an opportunity to build on this determination by making it a mandatory requirement for all digital services with a social networking component to set default privacy and safety settings at their highest level for all users under 18 years of age.

Recommendation 16

3.218
The Committee recommends the implementation of a mandatory requirement for all digital services with a social networking component to set default privacy and safety settings at their highest level for all users under 18 (eighteen) years of age.

Recommendation 17

3.219
The Committee recommends the implementation of a mandatory requirement for all technology manufacturers and providers to ensure all digital devices sold contain optional parental control functionalities.

  • 1
    Meta, Submission 49, p. 7.
  • 2
    Meta, Submission 49, p. 7.
  • 3
    Meta, Submission 49, p. 8.
  • 4
    Meta, Submission 49, p. 10.
  • 5
    Meta, Submission 49, p. 3.
  • 6
    Meta, Submission 49, p. 3.
  • 7
    Meta, Submission 49, pp 11-12.
  • 8
    Meta, Submission 49, p. 12.
  • 9
    Meta, Submission 49, p. 2.
  • 10
    Meta, Submission 49, p. 15.
  • 11
    Meta, Submission 49, pp 21-26.
  • 12
    Google Australia, Submission 30, pp 3-4.
  • 13
    Google Australia, Submission 30, pp 4-5.
  • 14
    YouTube, Child safety policy, available at: https://support.google.com/youtube/answer/2801999 (accessed 4 February 2022).
  • 15
    Google Australia, Submission 30, p. 5.
  • 16
    Google Australia, Submission 30, p. 8.
  • 17
    Ms Kathleen Reen, Senior Director of Public Policy, Asia-Pacific, Twitter, Committee Hansard, 21 January 2022, p. 17.
  • 18
    Twitter, Submission 50, p. 5.
  • 19
    Twitter, Submission 50, p. 8.
  • 20
    Twitter, Submission 50, p. 9.
  • 21
    Twitter, Submission 50, p. 10.
  • 22
    Twitter, Submission 50, p. 5.
  • 23
    Twitter, Submission 50, pp 12-13.
  • 24
    Snap Inc., Submission 16, p. 1.
  • 25
    Snap Inc., Submission 16, p. 1.
  • 26
    Snap Inc., Submission 16, p. 2.
  • 27
    Snap Inc., Submission 16, pp 1-5.
  • 28
    Snap Inc., Submission 16, p. 2.
  • 29
    TikTok Australia, Submission 57, p. 1.
  • 30
    TikTok Australia, Submission 57, pp 2-5.
  • 31
    Dr Hany Farid, Committee Hansard, 28 January 2022, p. 5.
  • 32
    Mr Peter Lewis, Director, Centre for Responsible Technology, Committee Hansard, 28 January 2022, p. 13.
  • 33
    Mr Peter Lewis, Centre for Responsible Technology, Committee Hansard, 28 January 2022, p. 12.
  • 34
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 9.
  • 35
    Ms Carla Wilshire, Chair, Centre for Digital Wellbeing (CDW), Committee Hansard, 21 January 2022, p. 28.
  • 36
    Ms Carla Wilshire, CDW, Committee Hansard, 21 January 2022, p. 28.
  • 37
    Ms Frances Haugen, Committee Hansard, 3 February 2022, p. 7.
  • 38
    Dr Hany Farid, Committee Hansard, 28 January 2022, p. 5.
  • 39
    Ms Erin Molan, Committee Hansard, 18 January 2022, p. 4.
  • 40
    Ms Lucinda Longcroft, Director, Government Affairs and Public Policy, Google Australia and New Zealand, Committee Hansard, 20 January 2022, p. 11.
  • 41
    Ms Mia Garlick, Regional Director for Policy, Australia, New Zealand and the Pacific Islands, Meta, Committee Hansard, 20 January 2022, pp 23-24.
  • 42
    Scarlet Alliance, Australian Sex Workers Association, Submission 85, p. 7.
  • 43
    Dr Kate Hall, Head of Mental Health and Wellbeing, Australian Football League (AFL), Committee Hansard, 1 February 2022, p. 15.
  • 44
    Ms Sunita Bose, Managing Director, Digital Industry Group Inc. (DIGI), Committee Hansard, 20 January 2022, p. 37.
  • 45
    Ms Sunita Bose, DIGI, Committee Hansard, 20 January 2022, pp 37-38.
  • 46
    Senate Legal and Constitutional Affairs References Committee, Adequacy of existing offences in the Commonwealth Criminal Code and of state and territory criminal laws to capture cyberbullying, March 2018, p. 63 (Recommendation 8).
  • 47
    Australian Government, Government response to the Senate Legal and Constitutional Affairs References Committee's report for its inquiry into the adequacy of existing offences in the Commonwealth Criminal Code and of state and territory criminal laws to capture cyberbullying (Government Response), April 2021, available at: https://www.aph.gov.au/DocumentStore.ashx?id=e1e15f74-60db-4894-960b-4e89ddcf9834 (accessed 10 January 2022) p. 19.
  • 48
    Australian Government, Government Response, April 2021, available at: https://www.aph.gov.au/DocumentStore.ashx?id=e1e15f74-60db-4894-960b-4e89ddcf9834 (accessed 10 January 2022) p. 19.
  • 49
    Australian Government, Government Response, April 2021, available at: https://www.aph.gov.au/DocumentStore.ashx?id=e1e15f74-60db-4894-960b-4e89ddcf9834 (accessed 10 January 2022) p. 19.
  • 50
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 9.
  • 51
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 11.
  • 52
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 11.
  • 53
    Department of Home Affairs (Home Affairs), Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse, available at: https://www.homeaffairs.gov.au/news-subsite/files/voluntary-principles-counter-online-child-sexual-exploitation-abuse.pdf (accessed 3 March 2022).
  • 54
    Meta, Submission 49.1, p. 20.
  • 55
    Home Affairs, Submission 40, p. 9.
  • 56
    Google Australia, Submission 30, p. 7.
  • 57
    Google Australia, Submission 30, p. 17.
  • 58
    Google Australia, Submission 30, p. 17.
  • 59
    Google Australia, Submission 30, p. 8.
  • 60
    Snap Inc., Submission 16, p. 3.
  • 61
    Meta Transparency Centre, Community Standards Enforcement Report – Child Endangerment: Nudity and Physical Abuse and Sexual Exploitation, available at: https://transparency.fb.com/data/community-standards-enforcement/child-nudity-and-sexual-exploitation/facebook/ (accessed 3 March 2022).
  • 62
    Facebook, Submission 24, Joint Standing Committee on Law Enforcement inquiry into law enforcement capabilities in relation to child exploitation, available at: https://www.aph.gov.au/DocumentStore.ashx?id=25ed9734-24c6-429d-8f3d-64b5a6eb7b13&subId=712353 (accessed 3 March 2022), p. 2.
  • 63
    Meta, Submission 49.1, p. 20.
  • 64
    Ms Tayla Harris, Committee Hansard, 1 February 2022, p. 20.
  • 65
    Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, Committee Hansard, 21 January 2022, pp 11-12.
  • 66
    Ms Kara Hinesley, Twitter, Committee Hansard, 21 January 2022, p. 16.
  • 67
    Ms Kara Hinesley, Twitter, Committee Hansard, 21 January 2022, p. 17.
  • 68
    Ms Kara Hinesley, Twitter, Committee Hansard, 21 January 2022, p. 17.
  • 69
    Ms Kara Hinesley, Twitter, Committee Hansard, 21 January 2022, p. 17.
  • 70
    Ms Kara Hinesley, Twitter, Committee Hansard, 21 January 2022, p. 17.
  • 71
    Meta, Submission 49, p. 16.
  • 72
    Meta, Submission 49, p. 16.
  • 73
    Meta, Submission 49, p. 8.
  • 74
    Meta, Submission 49, p. 8.
  • 75
    Meta, Submission 49, p. 16.
  • 76
    Meta, Submission 49, p. 16.
  • 77
    Google Australia, Submission 30, p. 6.
  • 78
    Snap Inc., Submission 16, p. 3.
  • 79
    Snap Inc., Submission 16, p. 3.
  • 80
    Ms Kara Hinesley, Twitter, Committee Hansard, 21 January 2022, p. 18.
  • 81
    Ms Erin Molan, Committee Hansard, 18 January 2022, p. 2.
  • 82
    Giggle, Submission 43, p. 2.
  • 83
    Ms Lucinda Longcroft, Director, Government Affairs and Public Policy, Google Australia and New Zealand, Committee Hansard, 20 January 2022, p. 7.
  • 84
    Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, Committee Hansard, 21 January 2022, p. 13.
  • 85
    Ms Erin Molan, Committee Hansard, 18 January 2022, p. 2.
  • 86
    Committee Hansard, 21 January 2022, p. 14.
  • 87
    Ms Tayla Harris, Committee Hansard, 1 February 2022, p. 19.
  • 88
    Ms Mia Garlick, Regional Director for Policy, Australia, New Zealand and the Pacific Islands, Meta, Committee Hansard, 20 January 2022, p. 20.
  • 89
    Giggle, Submission 43, pp 1-2.
  • 90
    Senator Claire Chandler, Submission 69, p. 1.
  • 91
    Ms Erin Molan, Committee Hansard, 18 January 2022, pp 3-4.
  • 92
    Mr Chad Wingard, AFL, Committee Hansard, 1 February 2022, p. 24.
  • 93
    Ms Christine Morgan, Chief Executive Officer and Prime Minister’s National Suicide Prevention Adviser, National Mental Health Commission (NMHC), Committee Hansard, 21 January 2022, p. 7.
  • 94
    Senator Claire Chandler, Submission 69, p. 2.
  • 95
    Mr Matt Berriman, Chair, Mental Health Australia, Committee Hansard, 21 January 2022, p. 24.
  • 96
    Ms Nicolle Flint MP, Submission 70, p. 3.
  • 97
    Ms Kara Hinesley, Twitter, Committee Hansard, 21 January 2022, p. 12.
  • 98
    Ms Kara Hinesley, Twitter, Committee Hansard, 21 January 2022, p. 12.
  • 99
    Twitter, Submission 50, p. 8.
  • 100
    Meta, Submission 49, p. 43.
  • 101
    Meta, Submission 49, p. 43.
  • 102
    Meta, Submission 49, p. 44.
  • 103
    Meta, Submission 49, p. 45.
  • 104
    Meta, Submission 49, p. 44.
  • 105
    Ms Frances Haugen, Committee Hansard, 3 February 2022, p. 2.
  • 106
    Ms Frances Haugen, Committee Hansard, 3 February 2022, pp 2-3.
  • 107
    Ms Frances Haugen, Committee Hansard, 3 February 2022, p. 3.
  • 108
    Meta, Submission 49, p. 57.
  • 109
    Department of Home Affairs (Home Affairs), Submission 40, p. 4.
  • 110
    Home Affairs, Submission 40, p. 4.
  • 111
    Google Australia, Submission 30, p. 15.
  • 112
    Google Australia, Submission 30, p. 15.
  • 113
    Meta, Submission 49, p. 57.
  • 114
    Home Affairs, Submission 40, p. 4.
  • 115
    Home Affairs, Submission 40, p. 4.
  • 116
    Reset Australia, Submission 12, pp 20-21.
  • 117
    Ms Carla Wilshire, Chair, Centre for Digital Wellbeing (CDW), Committee Hansard, 21 January 2022, p. 28.
  • 118
    Eating Disorders Families Australia, Submission 37, p. 3.
  • 119
    Eating Disorders Families Australia, Submission 37, p. 4.
  • 120
    Home Affairs, Submission 40, p. 4.
  • 121
    The Australia Institute, Submission 6, p. 6.
  • 122
    Ms Frances Haugen, Committee Hansard, 3 February 2022, p. 3.
  • 123
    The Australia Institute, Submission 6, p. 5.
  • 124
    The Australia Institute, Submission 6, p. 5.
  • 125
    The Australia Institute, Submission 6, p. 5.
  • 126
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 11.
  • 127
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 11.
  • 128
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 11.
  • 129
    Professor Amanda Third, Professorial Research Fellow, Institute for Culture and Society, Western Sydney University; Co-Director, Young and Resilient Research Centre, Western Sydney University (Young and Resilient Research Centre), Committee Hansard, 21 December 2021, p. 30.
  • 130
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 30.
  • 131
    Ms Frances Haugen, Committee Hansard, 3 February 2022, p. 6.
  • 132
    Home Affairs, Submission 40, p. 5.
  • 133
    Meta, Submission 49, p. 3.
  • 134
    Meta, Submission 49, p. 3.
  • 135
    Meta, Submission 49, p. 67.
  • 136
    Twitter, Submission 50, p. 6.
  • 137
    Twitter, Submission 50, p. 5.
  • 138
    DIGI, Submission 46, p. 17.
  • 139
    DIGI, Submission 46, p. 17.
  • 140
    DIGI, Submission 46, p. 18.
  • 141
    Digital Rights Watch (DRW), Submission 23, pp 11-12.
  • 142
    DRW, Submission 23, p. 13.
  • 143
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 12.
  • 144
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 9.
  • 145
    Office of the eSafety Commissioner, End-to-end encryption trends and challenges – position statement, 11 May 2020, available at: https://www.esafety.gov.au/industry/tech-trends-and-challenges/end-end-encryption (accessed 6 February 2022).
  • 146
    Office of the eSafety Commissioner, End-to-end encryption trends and challenges – position statement, 11 May 2020, available at: https://www.esafety.gov.au/industry/tech-trends-and-challenges/end-end-encryption (accessed 6 February 2022).
  • 147
    Office of the eSafety Commissioner, End-to-end encryption trends and challenges – position statement, 11 May 2020, available at: https://www.esafety.gov.au/industry/tech-trends-and-challenges/end-end-encryption (accessed 6 February 2022).
  • 148
    Office of the eSafety Commissioner, End-to-end encryption trends and challenges – position statement, 11 May 2020, available at: https://www.esafety.gov.au/industry/tech-trends-and-challenges/end-end-encryption (accessed 6 February 2022).
  • 149
    Home Affairs, Submission 40, p. 5.
  • 150
    Home Affairs, Submission 40, pp 5-6.
  • 151
    Home Affairs, Submission 40, p. 6.
  • 152
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 12.
  • 153
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 3.
  • 154
    Ms Kathryn Mandla, Head, Advocacy and Research, yourtown, Committee Hansard, 21 December 2021, p. 33.
  • 155
    Home Affairs, Submission 40, p. 6.
  • 156
    Dr Michael Salter, Committee Hansard, 18 January 2022, pp 12-13.
  • 157
    Home Affairs, Submission 40, p. 6.
  • 158
    National Society for the Prevention of Cruelty to Children, ‘Open Letter to Mark Zuckerberg’, available at: https://www.nspcc.org.uk/globalassets/documents/policy/letter-to-mark-zuckerberg-february-2020.pdf
  • 159
    National Society for the Prevention of Cruelty to Children, ‘Open Letter to Mark Zuckerberg’, available at: https://www.nspcc.org.uk/globalassets/documents/policy/letter-to-mark-zuckerberg-february-2020.pdf
  • 160
    Home Affairs, Submission 40, p. 6.
  • 161
    DRW, Submission 23, p. 14.
  • 162
    DRW, Submission 23, p. 14.
  • 163
    Ms Christine Morgan, Chief Executive Officer and Prime Minister’s National Suicide Prevention Adviser, National Mental Health Commission (NMHC), Committee Hansard, 21 January 2022, p. 8.
  • 164
    Ms Lyndall Soper, Deputy Chief Executive Officer, NMHC, 21 January 2022, Committee Hansard, p. 8.
  • 165
    WESNET, Submission 25, p. 3.
  • 166
    DRW, Submission 23, p. 5.
  • 167
    DRW, Submission 23, p. 6.
  • 168
    Ms Carly Findlay AM, Committee Hansard, 22 December 2021, p. 5.
  • 169
    Ms Angelene Falk, Information Commissioner and Privacy Commissioner, Office of the Australian Information Commissioner, Committee Hansard, 28 January 2022, pp 19-20.
  • 170
    Office of the Australian Information Commissioner, Australian Privacy Principles, available at: https://www.oaic.gov.au/privacy/australian-privacy-principles/read-the-australian-privacy-principles (accessed 9 February 2022).
  • 171
    Mr Michael Johnson, Assistant Secretary, Defamation Taskforce, Attorney-General’s Department, Committee Hansard, 28 January 2022, p. 39.
  • 172
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 5.
  • 173
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 6.
  • 174
    Ms Erin Molan, Committee Hansard, 18 January 2022, p. 4.
  • 175
    Mr Stephen Bendle, General Manager, Dolly’s Dream, Committee Hansard, 27 January 2022, p. 2.
  • 176
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 27.
  • 177
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 27.
  • 178
    Meta, Submission 49, p. 78.
  • 179
    Meta, Submission 49, p. 78.
  • 180
    Meta, Submission 49, p. 78.
  • 181
    Meta, Submission 49, p. 83.
  • 182
    Meta, Submission 49, pp 83-84.
  • 183
    Google Australia, Submission 30, p. 7.
  • 184
    Snap Inc., Submission 16, p. 1.
  • 185
    Twitter, Submission 50, p. 28.
  • 186
    Meta, Submission 49, p. 30.
  • 187
    Google Australia, Submission 30, p. 9.
  • 188
    Meta, Submission 49, p. 30.
  • 189
    Meta, Submission 49, p. 30.
  • 190
    Meta, Submission 49, p. 30.
  • 191
    Meta, Submission 49, p. 34.
  • 192
    Meta, Submission 49, p. 32.
  • 193
    Meta, Submission 49, p. 32.
  • 194
    Meta, Submission 49, pp 31-34.
  • 195
    DRW, Submission 23, p. 7.
  • 196
    Google Australia, Submission 30, p. 9.
  • 197
    Google Australia, Submission 30, p. 9.
  • 198
    Google Australia, Submission 30, p. 10.
  • 199
    Ms Tracey McAsey, Manager, Daniel Morcombe Foundation, Committee Hansard, 21 December 2021, p. 14.
  • 200
    House Standing Committee on Social Policy and Legal Affairs, Protecting the age of innocence: Report of the inquiry into age verification for online wagering and online pornography, February 2020, pp 71-72 (Recommendation 3).
  • 201
    House Standing Committee on Social Policy and Legal Affairs, Protecting the age of innocence: Report of the inquiry into age verification for online wagering and online pornography, February 2020, pp 88-89 (Recommendation 4).
  • 202
    Australian Government, Australian Government response to the House of Representatives Standing Committee on Social Policy and Legal Affairs report: Protecting the age of innocence, 1 June 2021, available at: https://www.aph.gov.au/DocumentStore.ashx?id=7a1aa6f4-b43b-4687-8e42-6686ce350beb (accessed 7 March 2022).
  • 203
    Family Zone, Submission 15, p. 3.
  • 204
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 27.
  • 205
    Reset Australia, Submission 12, p. 24.
  • 206
    Reset Australia, Submission 12, p. 24.
  • 207
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 13.
  • 208
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 13.
  • 209
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 14.
  • 210
    The Isolated Children’s Parents’ Association of Australia, Submission 67, p. 1.
  • 211
    Meta, Submission 49, p. 19.
  • 212
    Meta, Submission 49, p. 19.
  • 213
    Dr Michael Salter, Committee Hansard, 18 January 2022, pp 12-13.
  • 214
    Dr Michael Salter, Committee Hansard, 18 January 2022, pp 12-13.
  • 215
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 5.
  • 216
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 6.
  • 217
    Google Australia, Submission 30, p. 17.
  • 218
    Meta, Submission 49, p. 71.
  • 219
    Meta, Submission 49, p. 9.
  • 220
    Meta, Submission 49, pp 72-77.
  • 221
    Meta, Submission 49, p. 71.
  • 222
    Meta, Submission 49, p. 77.
  • 223
    Twitter, Submission 50, p. 11.
  • 224
    TikTok Australia, Submission 57, p. 3.
  • 225
    Snap Inc., Submission 16, p. 3.
  • 226
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 12.
  • 227
    Ms Frances Haugen, Committee Hansard, 3 February 2022, p. 6.
  • 228
    Mr Brandon Silverman, Committee Hansard, 28 January 2022, pp 43-44.
  • 229
    Ms Frances Haugen, Committee Hansard, 3 February 2022, p. 6.
