2. You Have A New Notification

The Forms and Impacts of Online Harm

Overview

2.1
In the short time that the internet has been a presence in the lives of everyday Australians, it has transformed those lives in unprecedented and extraordinary ways. One witness described the impact of digitisation as ‘akin to what happened in the industrial revolution centuries ago. It, effectively, is changing anything and everything that we do’.1
2.2
Digital services offer Australians a vast number of benefits, including communication, work and educational opportunities, networking,
e-commerce and more. Accordingly, in examining online harm, the Committee is careful to avoid any blanket assumption or finding that the online world as a whole, or particular online platforms, are inherently negative or damaging.
2.3
Nonetheless, the Committee heard extensive evidence suggesting that online harm is rampant in digital spaces. Victims of online abuse indicated that harmful content in digital settings had significant and lasting impacts, ranging from psychological harm and fears for personal safety to effects on life choices such as careers. This situation constitutes an urgent threat to the digital and personal wellbeing of Australians.
2.4
This chapter examines the nature of online harm, outlines who is most at risk of becoming a victim, and considers the vulnerable groups at particular risk. The chapter concludes by examining the effects of online harm on victims.
2.5
Readers are advised that this chapter contains material that may be distressing, including references to and examples of forms of abuse.

The digital world as a medium for mixed experiences

2.6
In examining online harm, it is important to recognise that not all interactions in the digital space are negative or abusive. For many Australians, the online world has been a source of positivity and social connection.
2.7
The Office of the eSafety Commissioner (eSafety) strongly advocated this message in its submission, stating that the internet and social media platforms offer users countless benefits:
Social media connects people with the world around them, as well as with their communities and their families. The stark and isolating nature of the pandemic has crystalised the need to access these channels. Forty-nine percent of Australians were either born overseas or have families overseas, and the online world can help keep them connected. Similarly, online connectivity remains critical for regional and remote communities, and for our Aboriginal and Torres Strait Islander communities who want to remain connected to country and culture. In addition, we know the benefits to neurodiverse young people from engaging online are evident both inside and outside of formal education. Being online helps them to develop social skills and offers ways to expand and enrich offline interests.2
2.8
The National Mental Health Commission (the NMHC) highlighted that digital platforms can be a source of good experiences as well as harmful ones, and that society needed to move towards a form of engagement with social media in particular which ‘enhances mental health and wellbeing’.3 The NMHC pointed out many benefits of online engagement, such as social engagement, education and employment opportunities.4 Further, studies indicate that children and young people predominantly use social media for ‘communication, connection and sharing with others’.5 The NMHC confirmed this, stating that community and social engagement was of critical importance to individual and societal wellbeing, as was seen particularly clearly during the COVID-19 pandemic.6
2.9
The Alannah and Madeline Foundation (AMF) agreed with this statement, stating that children and young people in particular use internet services and products to explore their personal development.7 The NMHC similarly reported young people’s experiences of finding their ‘tribe’ online, which they argued was particularly important due to the widely dispersed geography of Australia, and the consequent social isolation for those in rural and regional communities.8
2.10
While online spaces have the potential for positive experiences, a broad range of negative experiences were reported to the Committee. The remainder of this chapter examines online harm in depth.

Definition of ‘online safety’ and ‘online harm’

2.11
There is currently no internationally accepted definition of ‘online harm’, nor agreed types and definitions of its common forms.9 The eSafety Commissioner noted that the Online Safety Act 2021 (Cth) (the OSA) sets out specific types of online harm, but that:
At this stage, it is largely up to individual online service providers to establish rules and guidelines for this type of activity and content that is or is not permitted on their platforms within community guidelines or terms of service. However, these can diverge significantly across services.10
2.12
As stated in Chapter 1, the OSA defines ‘online safety for Australians’ as ‘the capacity of Australians to use social media services and electronic services in a safe manner’.11 ‘Online harm’ can therefore encompass a broad range of conduct across differing platforms and online spaces, which is ever-changing in a highly dynamic digital environment.

Typology of online harm

2.13
Interacting with the online world can result in different types of harm. The three main recognised types of harm are:
Individual harm;
Community-based harm; and
Economic harm.
2.14
This chapter will examine these types of harm, focusing in particular on individual harm.

Individual harm

2.15
Individual harm arises from a person’s interactions on the internet, including engaging with services such as social media and visiting websites containing illegal or disturbing content. A non-exhaustive range of situations that can cause harm includes:
The production and distribution of child sexual abuse material (CSAM), including children and young people being targeted or ‘groomed’ by perpetrators;
Cyberbullying, abuse or harassment: Where one person harasses, threatens, intimidates or name-calls another person using an internet service (which also includes volumetric attacks, where a large group of people attack one individual);
Exposure to illegal or disturbing content, such as child sexual abuse material (CSAM), violent or abhorrent content, or material promoting harmful or dangerous behaviours (e.g. suicide ideation, promotion of eating disorders);
Discrimination on the basis of sex, gender, sexual orientation, ethnic background, religious belief, political views, and others;
Technology-facilitated abuse (including the non-consensual distribution of explicit images, deep-fake or cheap-fake image abuse, cyber-flashing, utilising tracking devices or software to monitor a person without consent, and controlling access to accounts or technology); and
Identity theft or imitation, including people using fake social media accounts of others.
2.16
These forms of harm may not be isolated to a singular type or platform; material can be dispersed across multiple platforms, and victims may be unaware of how far material has circulated or on what platforms material exists.12
2.17
The following sections provide a brief outline of the main types of individual harm.

Cyberbullying and cyber abuse

2.18
Cyberbullying and cyber abuse13 include ‘online communication to or about someone which is menacing, harassing or offensive and also intended to cause serious harm to their physical or mental health’.14
2.19
The OSA sets out the elements required to establish this form of online harm for both adults and children. For adults, such online harassment is described as cyber abuse, which consists of the following elements:
The material being provided on a social media, relevant electronic, or designated internet service;
A finding that an ordinary reasonable person would conclude that the material distributed was intended to have an effect of causing serious harm to a particular Australian adult;
A finding that an ordinary reasonable person in the position of that adult would consider the material being (in all the circumstances) menacing, harassing or offensive; and
Any further conditions set out by the legislative rules.15
2.20
For children, similar conduct is described as cyberbullying to reflect how online abuse is experienced differently by children. Section 6 of the OSA outlines the definition of cyberbullying directed at an Australian child, which largely replicates the elements of section 7. The thresholds of what is considered serious harm, however, are lower due to children’s developmental stages.
2.21
Both cyber abuse and cyberbullying include a range of behaviours, including:
abusive texts and emails
hurtful messages, images or videos
imitating others online
excluding others online
humiliating others online
spreading nasty online gossip and chat
creating fake accounts to trick someone or humiliate them.16
2.22
Serious forms of online abuse targeting adults can include:
being harassed and threatened with violence because of their physical appearance, religion, gender, race, disability, sexual orientation or political beliefs
finding their personal contact details have been made public on a social media service or other online platform in order to scare, harass or attack them
being threatened with serious harm and other people online being encouraged to join in
being stalked and threatened online, particularly in the context of domestic and family violence
being encouraged to harm themselves, particularly in cases where they are known to be at particular risk (for example, because they have a mental health condition)
repeatedly being sent obscene and threatening messages as part of ongoing harassment.17
2.23
For cyberbullying targeting children, behaviour which falls under the definition used by eSafety includes ‘online communication to or about an Australian child which is seriously humiliating, harassing, intimidating or threatening’.18 Examples of such conduct include ‘abusive texts and emails; hurtful messages, images or videos; excluding others; spreading nasty gossip and chat; or creating fake accounts to trick or humiliate someone’.19
2.24
Cyberbullying and online abuse can involve large numbers of people or coordinated attacks. eSafety states that this behaviour, known as ‘volumetric attacks’ (or ‘pile-ons’ or ‘brigades’), can be among the most serious forms of cyberbullying and online abuse.20

Technology-facilitated abuse

2.25
Technology-facilitated abuse is a form of domestic and family violence, and is defined as involving ‘misuse of devices (such as phones, devices and computers), accounts (such as email) and software or platforms (such as social media) to control, abuse, track and intimidate victim-survivors’.21
2.26
Technology-facilitated abuse has now been recognised by the United Nations Special Rapporteur on violence against women, its causes and consequences as having ‘facilitated new types of gender-based violence and gender inequality’.22
2.27
eSafety’s research indicates that women and their children who are experiencing domestic and family violence ‘almost always experience technology-facilitated abuse designed to extend coercion and control over their lives’.23 This was confirmed by the Women’s Services Network (WESNET), who reported that a 2020 study found that almost all survey participants had experienced technology-facilitated abuse.24
2.28
Methods of technology-facilitated abuse included text messaging, tracking apps, FaceTime and iCloud, and the misuse of government accounts such as MyGov.25 WESNET also reported that the study indicated that women were regularly forced to film and record intimate images, suggesting that image-based abuse was being utilised by perpetrators.26
2.29
Australia’s National Research Organisation for Women’s Safety (ANROWS) outlined some of the other forms that technology-facilitated abuse can take:
Some of the other interesting research that we've come across is when people are transferring money. The banks have reported that even the comments in bank transfers have been used to perpetuate abuse. So there are many ways of using technology to continue the abuse and that relationship. It could be using devices in toys to stalk or monitor but also using social media, pretending to be a friend, pretending to be someone else as a way of both targeting children and targeting, usually, the mother. So there's certainly a case for thinking about the threshold, because often it might not meet that threshold, and also about a more fulsome understanding by service providers of how technologies are used to perpetuate abuse.27
2.30
Evidence suggests that technology-facilitated abuse is markedly different from other forms of abusive behaviour due to the capacity for harm to be committed immediately, more easily and with greater reach.28 Children are also likely to be either victims of technology-facilitated abuse themselves, or used by perpetrators as a means of monitoring the victim and continuing the abuse.29
2.31
The impact of technology-facilitated abuse is all-encompassing for many victims, as ANROWS highlighted:
The constant monitoring and abuse enacted through technology creates a sense of omnipresence for victims, making it feel as though they're constantly being watched by their perpetrators. Workers said this made victims hypervigilant and fearful and made them feel as if the abuse would never end or that they would never be able to escape. Technology-facilitated abuse has the ability to impact all facets of victims' lives. They don't feel safe at home, work, while studying or in social interactions.30
2.32
Witnesses also noted that technology-facilitated abuse is often part of a broader pattern of domestic and family violence and accordingly needed to be considered in the context of domestic and family violence more broadly. Ms Karen Bentley, CEO of WESNET, stated that:
From our perspective, the problem that we've got is that for a survivor who's experiencing domestic and family violence or some other form of gender based violence the abuse that they experience online through technology is one aspect of a wide range of tactics that the abuser will use to control and coerce the victim. When we have approaches which try to just deal with one aspect, everything else moves on the other side, so it's vitally important that we have a much greater understanding about the impact and the dynamics of how domestic and family violence actually works. Our responses to it can't be the cybersafety training ones that we have for the average person who is trying to stop a nameless, faceless Russian hacker—for want of a better example—and to protect their online presence. These are very targeted. They have intimate knowledge of the victim and will use and target the victim very, very directly.31
2.33
WESNET further highlighted the immense impact of domestic violence perpetrators using online platforms to target or harass victims:
…violence against women is not acceptable in any form on any platform and it has really devastating consequences online as well as in real life… people can be anonymous online and troll the hell out of people, and many of our survivors are badly affected by the fact that those people are doing this abuse anonymously and they can’t be found and tracked.32

Image-based abuse

2.34
Image-based abuse, while considered a form of technology-facilitated abuse, predominantly involves the sharing or threatened sharing of intimate images without consent, and is sometimes known as ‘revenge porn’. Intimate images that fall under the definition provided in legislation include the exposure of:
a person’s genital area or anal area (whether bare or covered by underwear); a person’s breasts (if the person identifies as female, transgender or intersex); private activity (for example, a person undressing, using the bathroom, showering, bathing or engaged in sexual activity); or a person without attire of religious or cultural significance if they would normally wear such attire in public.33
2.35
New technologies have further complicated image-based abuse due to the emergence of ‘deepfake’ technology. A ‘deepfake’ is a:
digital photo, video or sound file of a real person that has been edited to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say.34
2.36
eSafety states that deepfakes have the potential for positive uses, such as for entertainment, education and medical purposes, but that such technology poses significant risks when used harmfully. eSafety’s position statement on deepfakes explained that they can be used for a range of purposes that are damaging to victims, such as:
Creating ‘fake news’ or hoaxes;
Producing falsified pornography material;
Stealing someone’s identity or impersonating someone; and
Extorting victims by creating fake material of them and then threatening to release it to their contacts.35

Online child sexual exploitation

2.37
Online child sexual exploitation includes a range of behaviour and offences which relate to the grooming and sexual abuse of children. Types of behaviour that fall under the definition of online child sexual exploitation include:
An adult engaging a child in a chat about sexual acts
An adult sending nude or pornographic images of themselves to a child or exposing themselves via live streaming
Asking a child to view pornographic images/videos
Asking a child to perform sexual acts, expose themselves or share a sexual image
Taking or making and sharing or showing indecent images of children.36

Violent or abhorrent content

2.38
The production and publication of violent or abhorrent material online, particularly livestreamed material, has been subject to legislative scrutiny since the Christchurch terrorist attack in March 2019. The legislation defines abhorrent violent material as:
audio and/or visual content produced by a perpetrator or accomplice of a terrorist act involving serious physical harm or death, murder or attempted murder, torture, rape or kidnapping involving violence.37
2.39
The legislative definition contains a number of exceptions, such as bystander coverage, journalism, research and artistic purposes.38
2.40
eSafety states that such material poses significant risks to online users, including increased trauma and suffering for victims and their families, potential radicalisation and extremism, and the potential for this kind of content to be used to incite fear.39

Promotion of harmful behaviours

2.41
Content produced online has the capacity to encourage or promote destructive or unhealthy behaviours for vulnerable users. eSafety points to the examples of self-harm, suicide and eating disorders as topics which fall under this category.40
2.42
Suicidal ideation is a common topic of concern in relation to online harm. Orygen stated that social media ‘has the potential for increasing the risk of contagion or copycat behaviours and sharing information about suicide methods’.41
2.43
Further, online content depicting eating disorders (such as anorexia and bulimia) in a positive light, promoting disordered behaviour, or providing instruction in disordered eating, can encourage eating disorders for vulnerable people.42

Community harm

2.44
Community harm refers to situations where online content causes, or has the potential to cause, harm to the community. This form of harm encompasses situations such as:
Inciting violence or hatred against particular groups, such as gender, racial or disability groups (also known as hate speech);
Promoting or distributing material relating to extremism and terrorism, including live-streaming violent attacks; and
Spreading misinformation, disinformation, or encouraging mistrust in government institutions.
2.45
Further, types of individual and economic harm can have broader social impacts which result in community harm. For example, CSAM harms the individual victim, but can also affect the victim’s family and support networks, and more broadly can cause fear, anxiety and anger in the local community.

Online hate

2.46
Online hate is a term that covers a range of extremely harmful practices that have been known to proliferate on social media platforms in addition to other digital arenas. While no legal standard exists for the definition of ‘hate speech’ in Australia,43 eSafety defines online hate as ‘any hateful posts about a person or group based on their race, religion, ethnicity, sexual orientation, disability or gender’.44 The range of forms that online hate can take include:
Discrimination;
Hate speech;
Racism;
Misogyny, misandry and other forms of sex-based discrimination;
Sexual harassment; and
Homophobia or other forms of discrimination based on sexual orientation.45
2.47
The eSafety Commissioner draws a distinction between discrimination and hate speech, noting that while discrimination can be targeted against an individual, hate speech targets an entire group.
2.48
Harmony Alliance stated that groups who ‘do not have the same access to power and privilege as the dominant groups’ are likely to experience discrimination in various forms in online settings.46 They pointed to research conducted by eSafety which estimated that one in seven adults online (approximately 14 per cent) had been targeted by online hate speech on a social media platform between August 2018 and August 2019.47
2.49
Ms Nyadol Nyuon, incoming Chair of Harmony Alliance, pointed to examples such as the targeting of African groups on social media platforms in 2016 which attracted nationalistic and neo-Nazi groups.48 She also highlighted her own experiences of being racially abused, explaining how she had received comments ‘calling for the culling of people who looked like me’ and being attacked by a police officer who ‘called me an ignorant C-word who should F-off back to the war-torn shithole country I came from’.49
2.50
Focusing particularly on the experiences of migrant and refugee women in Australia, Harmony Alliance stated that online discrimination can take many forms, such as:
insulting, humiliating, demeaning or offensive comments – both directed towards them as individuals and to their communities;
derogatory language;
threats of sexual and physical violence;
threats against children;
death threats;
online stalking;
distributing personal contact details online (doxing); and
image-based abuse (for example, the non-consensual sharing of intimate or false photos online).50
2.51
Harmony Alliance also noted that a complicating factor for migrant or refugee groups was that some harassment or abuse was conducted in languages other than English, making it difficult to monitor and detect on platforms moderated primarily by English speakers. Reporting abuse was also said to be difficult because moderators were unable to understand the language or the contextual background.51
2.52
Some witnesses pointed out that discrimination was often not isolated to a particular characteristic of a person, and could be in relation to a number of other identity-related factors, such as gender, religion and sexual orientation.52
2.53
Harmony Alliance also noted that certain groups had experienced heightened levels of discrimination and hate speech during the COVID-19 pandemic. They pointed to evidence which suggested that Asian-Australians, in addition to young migrant and refugee women, had experienced an increase in abuse and discrimination in online settings.53

Disinformation and misinformation

2.54
Disinformation and misinformation on social media platforms are receiving growing attention.
2.55
What constitutes ‘disinformation’ and ‘misinformation’ is not universally agreed. The Digital Industry Group Inc. (DIGI) noted that there is currently no consensus across stakeholders as to what these terms should include.54 The Centre for Digital Wellbeing (CDW) provided the following definitions:
Misinformation is a term used to describe content or information that is false but was created or shared without the intent to cause harm. Misinformation is false or out-of-context information that is presented as fact, and can include made-up news articles, false information shared on social media platforms, doctored images and videos, and scam advertisements.
Conversely, disinformation is the purposeful or deliberate creation and dissemination of false information with the intention to mislead or cause harm. Disinformation can take many forms. It can include false or fake news content or fake news sites, images or text that are altered or distorted, or videos or commentary that include elements of fact mixed with elements of falsehood or exaggeration. Disinformation can also include real material used within a context that presents a distorted view of reality, such as a clip of a speech that is given a new and false attribution of meaning. The amalgamation of false information with truth is a common tool used in disinformation campaigns and is highly effective as a tactic of influence.55
2.56
The CDW explained that disinformation and misinformation are spread using a number of technological tools, such as:
Tools to automatically generate news articles, post on social media and engage with others (known as ‘bots’) which behave identically to humans online, making them difficult to detect;
‘Bot farms’, which are large groups of bots which are designed to work in a coordinated way to provide the appearance of truthfulness, persuasiveness and popularity, while also boosting the content’s reach on algorithm-generated social media sites; and
Troll farms, which are groups working together to upload content which is often ‘inflammatory, divisive and false’ to social media services.56
2.57
The Australian Communications and Media Authority stated that disinformation and misinformation have the capacity to cause significant harm to people both individually and collectively:
[m]isinformation can pose a risk to people’s health and safety. We have seen this with misinformation about COVID-19 and 5G technology … [o]nline, there is such a large amount of information from different sources that it can be hard to know who or what to believe. It may not be clear where the information has come from, who wrote it, or when it was produced. When we share something online, we do not always stop to think whether it is true. Misinformation can be new, surprising, or emotive. This can make us more likely to share it and it can often spread faster than the facts.57

Extremism and terrorism

2.58
Submitters to the inquiry argued that social media services’ algorithms, which amplify extreme and sensationalist content, have contributed to an increasing trend towards extremism in society.58 While extremism and terrorism have existed outside the internet for centuries, social media and digital platforms have provided a vehicle for those with extreme beliefs or ideologies to meet and advertise their cause.
2.59
The Department of Home Affairs (Home Affairs) outlined its concerns in relation to the spread of violent extremism via social media platforms and digital services:
Digital platforms present deep-seated challenges for Australia’s national efforts to contest and prevent violent extremism. Increasingly, violent extremists from across the ideological spectrum seek to use online methods to spread extreme and harmful propaganda, seed division and recruit individuals.59
2.60
Home Affairs stated that social media services are regularly used as ‘a major conduit for terrorism and violent extremist content’, but that due to Australian and international law enforcement’s efforts in removing content, such content is now shifting towards platforms which are less willing or able to remove it.60
2.61
The CDW provided an example of a recent study which highlighted concerns regarding social media services and their impact on extremism:
In one internal Facebook study, a researcher created a Facebook account for a fictional 41-year-old conservative mother with an interest in ‘young children, parenting, Christianity, Civics and Community’. After this fictional account liked memes and joined conservative groups on the first day, Facebook began recommending almost exclusively right-wing content on the second day. By the fifth day, it was recommending QAnon content and right-wing conspiracy theories. Facebook’s internal research found similar effects for a fictional liberal user.61
2.62
Similar findings were also reflected in the New Zealand Royal Commission’s Inquiry into the 2019 terrorist attack in Christchurch, which identified that extremists and terrorists were utilising social media platforms to find one another, share information and spread their ideologies.62

Economic harm

2.63
Economic harm describes financial harm experienced as a result of online conduct. It includes situations such as:
Scams or frauds committed online;
Ransomware and other forms of technology-facilitated harm;
False advertisements or representations; and
False reviews of businesses which can cause financial hardship.
2.64
This form of harm, while important, is not examined in this report.

Prevalence of online harm

2.65
Currently available statistics indicate that digital spaces are saturated with online harm, with everyday Australians experiencing the repercussions. The eSafety Commissioner provided the following statistics in relation to online harm:
Research conducted in 2017 indicated that 11 per cent of Australians over 18 years of age have been the target of image-based abuse, the majority of which was directed towards women aged between 18 and 24 years;63
Between August 2018 and August 2019, 67 per cent of Australian adults had a negative experience online, ranging from unwanted online contact, security breaches, and hate speech or abuse;64
One in five young Australians have experienced cyberbullying behaviour and one in five Australian children or young people admit to cyberbullying behaviour;65 and
During the 2020-2021 reporting period, eSafety received in excess of 23,500 reports from the public in relation to illegal or restricted content online, the vast majority of which concerned CSAM.66
2.66
It is important to note that some of these statistics are at least five years old. Given the exponential rise in the number of social media and digital platforms and the increasing power of existing companies, it is likely that these statistics are significantly out of date.
2.67
It is unclear how fully aware social media and digital platforms are of the extent of online harm being caused on their services. Many large social media platforms do not provide clear statistics on the level of harm present on their services. Twitter, for example, publishes regular Rules Enforcement Transparency Reports which detail the number and kind of breaches of its terms of service within a reporting period.67 Nonetheless, the reports do not provide an overall understanding of the number of users who experience online abuse or give an indication of the proportion of this behaviour across its platform. They also capture only detected abuse, meaning the true rate of online abuse, including abuse that goes undetected, is likely significantly greater.
2.68
In terms of bullying and harassment on its services, Meta stated that its latest Community Standards Enforcement Report found that the rate of bullying and harassment was 0.14-0.15 per cent on Facebook and 0.05-0.06 per cent on Instagram, which meant that this form of harm was ‘seen between 14 and 15 times per every 10,000 views of content on Facebook, and between 5 and 6 times per 10,000 views of content on Instagram’.68 It is unclear how Meta derived these statistics.
2.69
These statistics provided by Meta offer information on only one form of harm present on social media platforms. Further, by focusing only on the rate of identified bullying and harassment as a proportion of views, these statistics minimise the level of harm being caused, overlooking the impact on the victim, which can be disproportionately severe.

Who is most at risk online?

2.70
Online harm can be experienced by any user of the internet. Individuals from different backgrounds provided evidence to the Committee in relation to their experiences of online abuse. From high-profile media personalities to children, online harm does not discriminate.
2.71
Notwithstanding this, it is widely recognised that certain groups are significantly more likely to experience online harm or are more vulnerable to the effects of dangerous behaviour online. This section describes these groups and how online harm affects them.

Children and young people

2.72
Children are widely recognised as amongst the most at-risk groups in relation to online harm. The harmful content that children and young people are exposed to is diverse, including (but not limited to):
Online child sexual exploitation and children being contacted and groomed by abusive perpetrators;69
Accessing inappropriate content beyond a child or young person’s developmental level, which can lead to distress, desensitisation and other forms of harm (e.g. pornography, violence against people and animals, and self-generated sexual content);70
Terrorist and other extremist content;71
Disordered eating and body dysmorphia;72
‘Sextortion’, where a young person is coerced into providing, or willingly provides, explicit images of themselves to another person, who then threatens to share the images with the victim’s friends or family unless the victim provides more explicit images;73
Discrimination, including racism, hate speech and homophobia;74 and
Cyber-bullying, harassment, stalking and other forms of harm aimed specifically at an individual user.75
2.73
Children and young people experience online harm at alarming rates. In research conducted by eSafety in August-September 2021, almost half of the children surveyed had experienced hurtful or nasty treatment online within the past year, and one in ten had been targeted with online hate speech.76 The report also found that children routinely engage in risky behaviour online: six in ten children had communicated with someone they first met online, one in eight had sent a photo or video of themselves to a person they first met online, and one in eight had met in person someone they first encountered online.77 eSafety noted that these proportions represented a significant increase on the rates of risky activity observed in 2016.78
2.74
eSafety also examined the online behaviour of teenagers, finding that it was common for teenagers to be exposed to negative online content. The report stated:
Almost two-thirds of young people aged 14–17 were exposed in the past year to negative content, such as content relating to drug taking, suicide or self-harm, or gory or violent material.
Seven in ten young people aged 14–17 have seen sexual images online in the past year, while close to half have received sexual messages from someone online in the past year.79
2.75
eSafety’s findings were corroborated by other research. One study found that more than 70 per cent of vulnerable children and youth have witnessed harmful content online, such as violent or explicit content.80
2.76
Children’s safety organisations also reported that their activities indicated a high prevalence of exposure to online harm amongst children and young people. yourtown, the operator of Kids Helpline, explained that in 2020 approximately 4.5 per cent of the calls it received included reference to cybersafety issues.81 yourtown stated that online safety concerns were most prevalent for clients under 18 years of age, and were often accompanied by concerns relating to bullying, mental health, and suicide ideation.82 Since 2016, Kids Helpline has received approximately 209 contacts per year, particularly from the 13- to 18-year-old age group, in relation to online or texting-based sexual activity, including sexting and self-distribution of explicit images.83
2.77
The Carly Ryan Foundation (CRF) stated that 20 per cent of teenagers receive unwanted or inappropriate content online, such as violent or sexual content.84 Other groups working directly with young people, such as The Daniel Morcombe Foundation (DMF), stated that in their experience there had been a ‘definite increase’ in the rate of technology-assisted harmful sexual behaviours online.85
2.78
Further, children are entering online spaces such as social media at younger ages than previously observed.86 This may mean that exposure to online harm is also beginning earlier.
2.79
The reasons why online harm presents as a uniquely dangerous threat to children and young people are complex. Ms Sonya Ryan, Chief Executive Officer (CEO) and Founder of the CRF, suggested that children and young people are innately willing to trust others and share information. Ms Ryan explained:
They have insecurities, they're looking for validation and they want to be connected and be part of something. Often those vulnerabilities and the conditioning provided to them from what they're seeing through media and the online world sets them up, potentially, for an amount of suffering, whether that be physical, emotional or mental, because they simply cannot live up to what they're seeing around them in the online space. They often don't feel like they're enough. When they're looking for that validation, it leaves them very vulnerable to inappropriate content and contact.87

Experiences of managing young people’s online behaviour

2.80
The Committee received evidence demonstrating that families are struggling to manage their children’s online behaviours. eSafety’s research found that parents were often unaware of the extent to which children and young people were accessing harmful content or experiencing online abuse, and of how significantly these experiences were affecting their children. Disturbingly, eSafety found that parents’ awareness of their children’s exposure to sexual material was significantly lower than the actual rate of exposure.88
2.81
Parents were reported to feel pressured into granting their children access to social media platforms because their children’s peers already had access.89 As the CDW noted:
Parents and carers are in an unenviable situation when it comes to regulating their children’s social media use. If they do not allow their children to use social media, their children may be excluded or socially isolated. However, allowing social media use may negatively affect their child’s mental health.90
2.82
Further, anecdotal evidence from child safety organisations suggested that parents and carers ‘frequently lack confidence in their ability to help children stay safe or to effectively deal with unsafe experiences’, a finding which was supported by eSafety’s research.91 This was corroborated by witnesses, who stated that parents often felt overwhelmed by the pace of technological development and ‘simply give up’, which can result in children ‘tak[ing] advantage of their parent’s limited focus, lack of tech awareness and lack of time’.92
2.83
Witnesses also noted that while software and technological tools exist for parents to monitor and control their children’s access to online services, the software could be ‘expensive, with monthly fees and occasionally bugs’, and parents face further barriers such as time poverty and limited technological literacy when trying to choose the most appropriate software.93 Further, parents could place too much trust in the software to adequately protect children and relax their vigilance.94
2.84
Schools were also identified as recipients and managers of online harm issues, with varying degrees of success. eSafety stated that it had found that cyberbullying and other forms of online harm between young people often have their roots in the school ground, which sometimes leaves a platform moderator unable to understand the context of the online abuse.95
2.85
The AMF stated that schools receive complaints from parents in relation to potential cyberbullying incidents between students, and are often expected to intervene in these situations.96 The AMF suggested that this places significant pressure on school staff, who are expected to resolve the situation effectively.97
2.86
Body Safety Australia noted that schools experience issues when managing serious complaints between students, explaining that reporting potential offences can be complicated by the schools’ duty of care not just to the victim but also the offender.98 Body Safety Australia stated:
Victims who report to schools and see little or no response are re-traumatised by the lack of action as well as the necessity of facing their abuser every day in class. We have seen victims of online and offline abuse being unable to continue their education or being forced to change schools because of the effects and the inaction by schools. In many cases, online abuse will follow them to new schools, resulting in ongoing harm to their education and well-being.99
2.87
Box 2.1 provides a case study of harm in relation to children in online settings, particularly in how social media platforms respond to concerns regarding safety.

Box 2.1:   Case study: YouTube videos of young girls

Dr Michael Salter, an expert in child sexual exploitation and online abuse, provided an example of a YouTube video that had ‘gone viral’. In the video, a blogger described how he had identified a concerning trend about the platform and its use of algorithms.100
The video stated that children (or their parents or carers) were uploading innocuous videos of the children doing things such as performing gymnastics or dancing. These videos had attracted communities of paedophiles who had found the videos via search terms relating to children, and had then placed timestamps on them to enable other abusers to find the most explicit parts of a video (e.g. where a child inadvertently exposes a body part). The comments on these videos were highly explicit and enabled paedophiles to connect with one another. Further, due to YouTube’s algorithms which recommend material based on a user’s search and watch history, the platform recommended similar videos to users, essentially creating an ‘alternate’ side to the platform which enabled paedophiles to easily access more material.101
After it was alerted to this situation, the platform’s response was to de-monetise all videos on YouTube and turn off all comments in videos depicting children, which Dr Salter described as ‘just a mass and very blunt intervention into the problem’.102 YouTube (administered by Google) deleted the comments and accounts on the specific video, and removed thousands of videos containing inappropriate images of young people and channels containing inappropriate content.103 Nonetheless, the platform declined to deactivate the algorithm-driven ‘recommendations’ system for this type of content due to the potential impact on content-producers who rely on the recommendations system for viewership.104

Women

2.88
Women are highly likely to be targeted for online abuse. eSafety’s research indicates that women and girls ‘face disproportionate levels of online abuse that is sexualised and violent’, making up two-thirds of complaints made to eSafety regarding cyberbullying, image-based abuse and adult cyber abuse.105
2.89
eSafety research about women in the workplace identified that women experience a range of online abuse, including:
Unwanted private messages;
Negative comments about their content;
Bullying or trolling;
Defamatory comments;
Offensive comments about race, ethnicity or gender;
Receiving slurs against their professional name;
Lies or rumours;
Stalking;
Impersonation or fake accounts;
Threats of real-life harm or abuse; and
Being the target of an ‘anti’ or ‘hate’ group.106
2.90
Women are more likely to experience online harm, particularly harm that is gender-based in nature, such as image-based abuse, sexist and misogynistic harassment and abuse (including harassment about appearance, virtue and fertility), and technology-facilitated abuse.107 Women are also likely to receive violent threats online, including threats of generalised violence, rape and murder.108
2.91
Certain groups of women were stated by witnesses to be more likely to be subject to online abuse. Ms Nicole Shackleton stated that, according to her research, there were four subgroups of women which appeared to be more likely to receive abuse:
1. they occupy positions of power (such as politicians);
2. they draw attention to the ways that women’s experiences are rooted in systemic inequalities, or they call out men, masculinity and the patriarchy for contributing to and benefiting from social injustices and inequalities (such as activists and some journalists);
3. they assert their right to occupy and participate in public spaces, particularly when they enter traditionally masculine spaces (such as women athletes and journalists); and
4. they step out of traditional gender roles, or they do not conform to traditional ideas of femininity and beauty standards.109
2.92
Ms Sall Grover, founder and CEO of the app Giggle For Girls, outlined to the Committee her experiences of online abuse and how social media platforms have dealt with these instances:
One of the most horrific images I've ever received—my last name is Grover, which is obviously like a muppet from Sesame Street. Since I was born I've had a little Grover toy, and one person would send me images every day of Grover in a noose. It was just one of those things. Because I was born early, I was in an incubator, and they put a Grover toy in there to be my protector, so I've always had that image of it. Getting this image of Grover in a noose affected me more than just the words. For example, I got one the other day: 'I hope you burn in hell.' You get to a point where you have to develop a thick skin so it's water off a duck's back, but you do, at the same time, internalise it. As with every other piece of abuse I've received and with this muppet in a noose, I reported it to Twitter every time it happened. They always came back saying it has not violated their terms and conditions. I was like: How? At what point are your terms and conditions violated, because it seems to me that someone can go and post any kind of abuse that they want but when your focus maybe is on, say, misinformation and you'll ban people for that—which is an issue of itself; I understand—but abuse is also an issue, and it's part of the enjoyment of using a service. So they've done nothing. I've never, ever, ever had somebody removed or punished from Twitter for sending death threats or rape threats—ever.110
2.93
Women are also likely to be targeted in their workplace or in connection with their work. According to eSafety research, 35 per cent of women have experienced online abuse in relation to or as a consequence of their work.111 Within this cohort, certain groups were more likely to experience online abuse, such as those with an online or media public profile, those living with a disability, those identifying as LGBTIQA+, and those aged between 18 and 34 years.112 Concerningly, women reported receiving abuse at work (such as via a work email) or from colleagues in the same industry.113 This had a detrimental impact on victims’ careers: women reported feeling unsafe and less capable in their jobs, reducing (temporarily or permanently) their online activity, shying away from or declining leadership positions, or leaving their job or industry.114

Women in prominent positions

2.94
Women in public or prominent positions, such as journalists, politicians, sportswomen, and other public figures, have been identified as a particular group that experience higher levels of online abuse (see Box 2.2). The eSafety Commissioner stated that women in the public eye are recognised as receiving extreme forms of abuse:
… we have a social media self-defence program called Women in the Spotlight, which is targeting women who are politicians, journalists and in the public eye, because it's not just greater prevalence; it's the way that the content manifests. It's sexualised, it's violent; it's about appearance, supposed virtue, fertility. It's designed to humiliate and silence.115
2.95
Concurring with this point, Ms Nicole Shackleton stated that women in prominent positions are likely to experience gendered abuse in online forums, noting examples such as former Prime Minister Julia Gillard, Victorian MP Fiona Patten, and Federal Senator Dr Mehreen Faruqi as women in politics who have spoken out about the online abuse they have experienced.116
2.96
Ms Nicolle Flint MP, Member for Boothby, provided a submission to the Committee outlining her experiences of receiving gendered online abuse in addition to being stalked online and offline.117 She also pointed to other examples of women in prominent positions who had experienced abuse online, including journalists Ms Leigh Sales and Ms Van Badham, former Federal Minister the Hon. Kate Ellis, and wife of the current Prime Minister, Ms Jenny Morrison.118
2.97
Dr Kate Hall, Head of Mental Health and Wellbeing at the Australian Football League (AFL), suggested that both men and women in the public eye are routinely objectified and dehumanised ‘because people project all of their own desires and wants on that individual’, which perpetuates abuse.119 Further, she stated that people in prominent positions often feel that they must accept the abuse and ‘toughen up’ as part of their job, which Dr Hall said was psychologically unsafe and contributed to further trauma.120
2.98
Ms Erin Molan provided her experiences of online abuse to the Committee. She explained that, in her position as a sports journalist on The Footy Show, she had been subject to online abuse, including negative and abusive comments on her social media feeds and threatening direct messages, one of which expressed a hope to hurt her and her then-unborn child. These messages made Ms Molan fear for her safety and that of her family, but she did not speak publicly due to shame and embarrassment.121
2.99
Ms Molan approached the social media companies involved to attempt to stop the abuse but found it difficult to have her concerns taken seriously or for the abuse to be addressed. In one instance she reported a user who had threatened to try and ‘kill the child within my stomach, and they came back and said that it didn’t meet the threshold for inappropriate behaviour’.122
2.100
See Box 2.2 for a case study relating to online abuse targeting women in prominent positions.

Box 2.2:   Case Study: Ms Tayla Harris

Ms Harris is a prominent sportswoman in the Australian Football League Women’s (AFLW) competition. Ms Harris has been subject to online abuse following the publication of a photo of her playing football in 2019, an episode she dubbed ‘kicking-photogate’.123 The photo depicted Ms Harris performing the follow-through of a kick, in a classic AFL pose.
Ms Harris stated that, following the publication of the picture, she received sexualised and disturbing comments, tags and direct messages from largely anonymous accounts. She received insults based on her personal character, which she found distressing. Ms Harris also stated that this abuse was what she described as ‘a pile-on’, otherwise known as a volumetric attack.124
Ms Harris’ story went ‘viral’ around the world, and she continued to make comments in the public sphere about the nature of the abuse she was receiving, which further increased the attacks on her. Ms Harris expressed the view that some comments were aimed at silencing her.125 In regard to having the harmful material taken down, Ms Harris stated that she attempted to report the content to the platforms but found the process very difficult. She also expressed that she had hoped that live sports broadcasters or the AFLW would be able to moderate content and report abuse as it arose but recognised that it would require a non-stop effort to do so.126

Culturally and linguistically diverse people

2.101
People from culturally and linguistically diverse backgrounds (CALD communities) experience higher rates of online abuse than average, particularly in regard to hate speech and extremism. eSafety reports that 18 per cent of people from CALD communities experience harm, compared to the national average of 14 per cent.127
2.102
The Committee received powerful testimony from Ms Nyadol Nyuon, Incoming Director of the Sir Zelman Cowen Centre and Chair of Harmony Alliance, who detailed her personal experiences online. Ms Nyuon stated:
The first online attack I received came after my first-ever media appearances on national television. The abuse was predominantly racist in nature, and some of the abuse used such violent language, including calling for the culling of people who look like me. I remember taking screenshots of the pictures of some of the individuals who directed the worst abuse, hoping that, at the very least, I might avoid them in public.
The second attack was more sustained and reached every presence I had online. In what the eSafety Commissioner described at a Senate hearing as 'volumetric attack', I was tracked across all social media platforms and trolled predominantly with racist abuse. One came from a police officer, who called me an ignorant C-word who should F-off back to the war-torn shithole country I came from. He later apologised, and I accepted his apology. This time, though, the abuse and many things that were happening made me take three months off from work. The online abuse was not the only reason, but it played a substantial role in me taking the time to literally try to heal and reconnect again with a sense of safety. Because of that, I no longer share pictures of my children online, I prefer that my family members do not follow me online so they do not receive abuse, and I am constantly on watch to remove abuse that pops up on almost a daily basis.128
2.103
In addition to her personal experiences online, Ms Nyuon pointed to evidence suggesting that migrant and refugee women are at risk for online facilitated abuse, particularly in the context of family violence. She cited examples she had witnessed where men ‘continue their campaign of terror on women by abusing them online, sharing their personal images without consent’.129 Complicating factors in these situations include that these women may not speak English proficiently or at all, which means finding recourse for the abuse is limited. Further, the abuse can be conducted in languages other than English, which Ms Nyuon stated meant that hosting platforms often could not assist in the removal of the content because they did not understand the language being used.130
2.104
Further, Ms Nyuon suggested that social media provides an outlet for people to air ‘racism, racist bullying and discrimination based on colour’.131 She pointed to an example in 2016 where nationalistic and neo-Nazi groups used social media to issue threats towards African communities in Melbourne, and utilised the platforms to recruit people to do ‘night work’ to attack African youths.132

People living with disability or medical conditions

2.105
People living with disability or particular medical conditions are at higher risk of abuse online, which often tends to focus on their disability and/or their physical appearance.133
2.106
Ms Carly Findlay OAM outlined her experiences to the Committee, stating that she has had an active online presence since 1996 and has benefitted from this use, including through friendship, work opportunities and networking with the disability and facial difference communities. Ms Findlay expressed the view that for those in the disability community or other marginalised groups in society, the internet could be a place of safety and community.134
2.107
Ms Findlay had been wary of putting her photograph on the internet, as she has a rare facial difference condition called ichthyosis and had been subject to ridicule in offline settings.135 She stated that after she published her photograph for work-related purposes, it was then repurposed by other online users to mock and abuse her:
In December 2013 … I woke up to my photo being misused on Reddit. Reddit is a horrible cesspit of the internet … My photo was used on the 'what the fuck' forum. They were asking what the fuck had happened to my face. There were about 500 comments when I woke up, and they were all hideous. They were like, 'She looks like a glazed doughnut,' 'She looks like a lobster,' 'She looks like something my dog vomited up.' I sort of had a feeling that this would happen. I had a feeling my photo would be misused like this.136
2.108
Ms Findlay stated that she was able to manage the abuse satisfactorily herself after writing a Facebook post and publishing it as a direct response on Reddit, which ‘changed the conversation’.137 She reported other forms of abuse she had experienced, including being stalked by a person she had engaged with online, the creation of a fake Instagram account intended to mock her, and death threats.138
2.109
Ms Findlay also explained to the Committee that her interactions with digital platforms have given her the sense that the disability community is not supported. She observed that persons with facial differences often had content warnings applied to their photographs, which she attributed to platforms’ artificial intelligence systems applying these warnings automatically.139 Ms Findlay also noted that it was very difficult for people with disabilities to be ‘verified’ by Twitter, as many of the requirements to have a verified account were ‘quite prohibitive in an ableist world’.140
2.110
While people living with disability face online risks like most users of digital platforms, technology can be weaponised against them in particular ways. eSafety research suggests that women living with disability are at higher risk of technology-facilitated abuse, ranging from online harassment and misuse of their social media accounts to being monitored via spyware and other tracking technology, and image-based abuse. eSafety also found that perpetrators were often those closest to them, such as a partner or former partner, family members and carers.141

Aboriginal and Torres Strait Islander peoples

2.111
Aboriginal and Torres Strait Islander peoples experience hate speech on digital platforms at over three times the average rate for Australians online.142 eSafety states that Indigenous women in particular are more likely than the general population to experience technology-facilitated abuse, but that Indigenous women in remote and regional communities are less likely to be aware of the issue.143
2.112
Mr Chad Wingard, a football player in the AFL, explained that the abuse he received online had a substantial impact on him, and outlined his approach to dealing with it:
If you're an Indigenous person or a person of colour or it's your sexuality or whatever it is you're being bullied about—I can only speak for being an Aboriginal person and a person of colour. However, my experience so far is that it takes a toll. It's draining and you think you'll let it slide or it's not the one that you think you need to call out. For me calling it out recently is because it affected me but not enough for me to give that person the limelight. It came to a point where I said, 'No, this is not on. I'm going to call out every single thing that happens now.' This is purely because I might be strong enough and have enough support around me to get through this, but I don't want 19-year-old kids coming from all over Australia who aren't capable and should not have to deal with this to even give these guys a chance.144

Other vulnerable groups

2.113
eSafety research suggests that other social groups are also at higher risk of online harm. A non-exhaustive list of these groups includes:
People who identify as LGBTIQA+ or gender-divergent, with rates similar to those experienced by Aboriginal and Torres Strait Islander peoples;145
People with particular religious beliefs;146 and
Older Australians.147

Repercussions of harms experienced online

2.114
Harm experienced by individuals online is not confined to the internet. Online harm can have far-reaching consequences that affect a person’s life in a number of ways. This section outlines the ramifications of online harm for individuals and the broader community.

Range of impacts caused by online harm

2.115
Online harm has the potential to affect individuals in significant and diverse ways, in both online and offline environments. eSafety suggests that the impacts of online harm can include:
Personal safety impacts – fear of psychological violence, physical violence and murder;
Emotional and social impacts – annoyance, anger, humiliation, shame, guilt, self-blame, deception, betrayal and/or fear;
Financial impacts – ability to work and earn an income, loss of financial security, restricted access to or knowledge of personal finances; and
Health and wellbeing impacts – anxiety, aggression, depression, self-destructive behaviour, physical health problems, intimate relationship difficulties, re-victimisation, disassociation, loss of self-esteem and confidence, withdrawal from social activities, lack of trust, substance abuse, ongoing trauma, self-harm and suicide.148
2.116
Other impacts resulting from negative online incidents experienced by adults include mental or emotional stress and reputational damage.149
2.117
The impacts of online harm also depend on the type of harm experienced. For example, eSafety’s research into image-based abuse indicates that victims felt a range of emotions and impacts, finding that:
65% felt annoyed, 64% felt angry, 55% felt humiliated, 40% felt depressed and 32% felt afraid for their safety. It negatively affected the self-esteem of 42%, the mental health of 41% and the physical wellbeing of 33% of victims.150

Trauma

2.118
Victims of online abuse can experience trauma from their experiences, particularly in relation to the most serious forms of online harm such as online child exploitation. Symptoms of trauma include ‘fear, sleeplessness, paranoia, feelings of threat and lack of safety, and ostracism or social exclusion’.151
2.119
Dr Michael Salter explained that victims of online abuse can often experience trauma due to their experiences, depending on the type of harm and how long the harm continued for.152 He stated that trauma refers to a psychological injury, which can be in relation to a one-off incident or continuous or repeated harm; some forms of trauma, such as post-traumatic stress disorder, can result in ‘intrusive psychological symptoms’ such as nightmares or flashbacks, which would ultimately resolve over time with psychological treatment.153
2.120
Complex trauma, however, stems from ‘repeated betrayal and violation’, potentially involving violence, psychological and bodily invasion, and degradation.154 Dr Salter stated that complex trauma is often experienced over longer periods, particularly where a victim is young and the abuse has been perpetrated regularly and over an extended time.155 For victims of child sexual exploitation, for example, Dr Salter stated that the trauma they experience is often lifelong, impacting on a victim’s sense of safety, ability to have safe relationships, and mental and psychosocial wellbeing.156
2.121
Dr Salter further explained how complex trauma applies to victims of online harm, noting that for children and young people in particular the harm can be overwhelming and all-consuming:
In terms of its link with online abuse, it's very typical that complex trauma is present for victims and survivors of online sexual exploitation for a range of reasons. There may have been offline abuse that then goes online, or there may have been online abuse—the child may have been induced into creating nude or sexual content. The continuing circulation of that material is extremely anxiety provoking and fear provoking. It's quite common for victims of online exploitation that they may be contacted repeatedly by abusers, who may in fact blackmail them and extort them with the content. It may be the same abuser or a different abuser. The fact of their online abuse may then become known to their peers, for example, at which point they may be subject to extensive bullying at school. There really can be the perception for this group that their life is destroyed. This is incredibly distressing for the young person, obviously, but also this anxiety, this trauma interferes with what we might say is a normal developmental pathway, their psychological but also physiological pathway. It interrupts psychological development and it interrupts neurological development; this then increases their risk of psychiatric and also autoimmune and other issues in adulthood.157
2.122
Mental health professionals told the Committee that they had witnessed the impact of trauma on victims of online abuse. Dr Kate Hall, Head of Mental Health and Wellbeing at the AFL, stated that she had worked with a number of players who had demonstrated symptoms consistent with psychological trauma as a result of online abuse.158

Mental health

2.123
Online harm can affect the mental health of victims, with increased risk of depression and anxiety commonly cited as harmful repercussions of online abuse.159
2.124
On one hand, evidence suggested that children and young people are increasingly at risk of experiencing mental health issues. A 2016 study by Mission Australia and the Black Dog Institute examining youth mental health over a five-year period found that, since 2012, one in four young people in Australia is at risk of serious mental illness, representing a 4.1 per cent increase over the reporting period.160 Importantly, this study’s estimates may be considerably out of date given the increased presence of online platforms in young people’s lives since 2016.
2.125
The connection between the mental health of children and young people and their online activities, however, has yet to be established definitively. Notwithstanding the reported increase in demand for mental health services, the causal link between online harm and mental health is not clear. eSafety stated that it was critical to be cautious in drawing connections between online behaviour and users’ mental health:
It is important to take a nuanced and balanced view of children’s and young people’s experiences online and avoid drawing causal lines where they are not supported by evidence. The evidence before us suggests the relationship between mental health issues and social media use is complex. In fact, some usage can be positive and beneficial to mental health and wellbeing, while other usage patterns and experiences can be harmful.161
2.126
Professor Amanda Third noted that she had seen anecdotal evidence of extreme pressure on organisations that provide services to children, which was corroborated by groups such as the DMF and yourtown.162 She posited, however, that this rise could be attributed to numerous factors external to online matters, including unemployment, income difficulties, or unequal access to education.163 eSafety agreed with this observation, noting that a broader contextual understanding of young people’s lives was required to understand mental health challenges:
Most research exploring the intersection between social media and mental health notes there are mediating factors ranging from personality, underlying mental health issues, age, gender, socio-economic background, ethnicity, level of parental engagement, and a person’s level of self-regulation in social media use.164
2.127
This point was also made by the NMHC, which stated that while the mental health of young people was clearly in decline, the link to online usage was less clear. The NMHC stated that its work had identified that the ‘trajectory is that mental health and wellbeing of our young people has been declining steadily for the last few years’, beginning prior to, and accentuated by, the COVID-19 pandemic.165 In conducting studies on youth mental health, the NMHC explained that it had consulted with youth advocates and its technical advisory group in relation to social media and online usage and the extent to which it impacts on mental health. Its findings indicated that ‘it’s probably an amplifier, not a driver in and of itself’.166

Harm to psychological and physical development in children and young people

2.128
Witnesses reported that there are significant repercussions from children and young people’s exposure to unsafe or harmful content. Young people in particular are at risk of believing that the content they see online is representative of real life.
2.129
Ms Sonya Ryan, CEO of the CRF, stated that students today watch adult pornography to learn about sexual health and behaviour, which can lead them to believe that the sometimes violent or extreme content portrayed is normal. Ms Ryan stated that medical professionals have reported to the CRF increasing rates of young people presenting with serious injuries after feeling ‘pressured into performing degrading and dangerous sexual activity’.167
2.130
Dr Michael Salter also stated that ongoing trauma can interrupt neurological and psychological development, which can increase the risk of autoimmune disease and serious psychiatric conditions later in life.168

Psychological and physical safety

2.131
At the most extreme end, online harms can cross over into the offline world and pose a significant threat to a victim’s psychological and physical safety.
2.132
Victims of online harm can feel extreme fear and threat. Witnesses to the inquiry described feeling a sense of fear and terror as a result of being abused online.169 The impact of harms such as technology-facilitated abuse can leave victims with a sense of ‘exhaustion, despair and hopelessness’.170 Targets of online abuse were also said to be reluctant to participate in online spaces due to fear of being attacked, with some withdrawing from social media engagement altogether, to the detriment of their personal lives and careers.171
2.133
The sense of fear and threat can go further into fears for personal and physical safety. ‘Doxxing’, where a person’s private information (such as address or contact information) is released online maliciously, can result in threatening conduct such as stalking and harassment.172 One study found that participants who had been abused had been:
verbally and physically abused on the street following online harassment, having people come to the houses of women after they were doxxed, only to have their child open the front door, or having their animals killed on their private property.173
2.134
Witnesses also pointed to the example of the British Member of Parliament, Ms Jo Cox, who was murdered after she was abused, harassed and threatened online.174

Impact of COVID-19 on online safety

2.135
The COVID-19 pandemic and its accompanying social restrictions necessitated a seismic shift for many Australians towards online spaces for work, education and communication with friends and family. Some commentators have argued that this increased use of digital products led to a rise in online risks.
2.136
Ms Kate Everett, founder of Dolly’s Dream, stated that pandemic lockdowns led to younger people having increased screen time, exposing children to online platforms at extremely young ages.175 The Centre for Excellence in Child and Family Welfare also reported that older children were using online platforms significantly more as a result of social restriction measures, particularly in states such as Victoria.176
2.137
However, Professor Amanda Third stated that concerns regarding proliferating online abuse during the pandemic may not be well founded:
What we've seen in the context of the pandemic is an increased use of digital technology and really some of the reconfiguration of children's digital media practices over this two-year period. There is some hypothesis that, as a consequence of that intensified use of digital technology, there has been a heightened exposure of children to forms of online harm. Particularly, there are concerns about cyberbullying, about child sexual exploitation and so on. However, the evidence that we have to date is not yet concrete. Many of those impacts are not yet well documented by rigorous research. We have some strong indications that children may have been exposed to more intense forms of harm, but, at the same time too, we don't really know the full extent of that increased exposure and also we don't know whether the harms that potentially arise are going to extend into the future as we emerge beyond the pandemic. It is quite possible that, as we move out of the pandemic, we will recalibrate and that things will in a sense become more balanced.177
2.138
Further, while Professor Third acknowledged that the pandemic had been damaging to children’s mental health, she cautioned that a clear causal link between increased technology use and children’s mental health had not been established. She explained:
[T]he challenge that's there for us is that the pressure—because we don't know enough yet—that's on children's mental health today is the output, if you like, of a very complex set of dynamics. It is very, very difficult in this scenario to point to causal connections. For example, it's very tempting to say, 'Children have spent a lot more time using social media platforms and other forms of technology to connect and to maintain their education and so on, and this could be a cause of the pressure that they're experiencing,' but of course there are also other shifts and changes happening in children's lives at the same time. Their physical activity has been inevitably reduced because they haven't had capacity to go outside and exercise in the ways they might have. They have had very little time face to face with peers and they've had huge disruptions to their routines and so on. As we unpack this question, there is a lot to be cognisant of and a lot of dynamics to hold in the balance.178
2.139
It was also noted by the Isolated Children’s Parents’ Association that for many children in regional or remote areas, the challenges posed by pandemic-related isolation were not new:
I think that we really need to remember that geographically isolated children have been doing this for a very long time. They have to have access to online platforms for their education, for starters. And then when a geographically isolated family sends their child to boarding school, they don't want them to be isolated at the boarding school; they wanted them to be able to have contact with their family and with the outside world while they are there. I think while safeguards need to be put in place, we also need to remember that these tools are powerful tools and very useful tools in some ways, so we need to be careful to ensure that the range of availabilities to them are used in a way that is effective and efficient in the unique circumstances that they find themselves in, because completely removing their access or limiting their access too much may lead to the opposite problem, whereby we have underhand things happening because we don't know about it.179

Committee comment

2.140
Positive outcomes can result from online use for Australians of all ages and personal backgrounds. The internet has produced, and will continue to produce, remarkable and previously unthinkable ways to enhance the lives of people from all walks of life.
2.141
Having said that, the online abuse described by witnesses is unacceptable in modern society. The Committee does not find it in any way tolerable that the most vulnerable groups in society – including children and young people, women, migrants and refugees, people living with disability and others – experience harm more often than any other group. This situation further marginalises these groups by driving them out of the public square and denying them their rights to participate in public discourse as equal citizens.
2.142
Relatedly, online culture does not exist in isolation from the offline world. The most vulnerable groups in offline society are also the most likely to be the targets of abuse in online settings. This indicates that cultural change may not be possible online until it is also addressed offline. Broader change in society to ensure the fair, equitable and safe treatment of all Australians, particularly the most vulnerable, is necessary to improve online safety.
2.143
Furthermore, it is important for all Australians to accept that they have a role to play in improving online safety. While technology companies and government have critical responsibilities in managing online safety, the Committee is of the view that behind every harmful action online is a person who has chosen to behave that way. It is not until we fundamentally change what we believe is acceptable online conduct that we can truly address online harm.
2.144
The Committee believes that this message should be utilised in an educational campaign directed at all Australians, focusing on digital citizenship. By encouraging Australians to consider the way in which they engage and act as responsible citizens in the online world, broader cultural change may be possible, both in the digital space and offline.

Recommendation 3

2.145
The Committee recommends that the eSafety Commissioner undertake research focusing on how broader cultural change can be achieved in online settings.

Recommendation 4

2.146
Subject to the findings of the research in Recommendation 3, the Committee recommends that the Australian Government establish an educational and awareness campaign targeted at all Australians, focusing on digital citizenship, civics and respectful online interaction.

Recommendation 5

2.147
The Committee recommends that the eSafety Commissioner examine the extent to which social media companies actively prevent:
recidivism of bad actors;
pile-ons or volumetric attacks; and
harms across multiple platforms.
2.148
The eSafety Commissioner should then provide the Australian Government with options for a regulatory framework, including penalties for repeated failures.
2.149
The Committee is mindful of the significant harm caused to Australians who experience online abuse or other serious forms of harm. The Committee was particularly concerned about the most serious harms, such as child exploitation, image-based abuse and technology-facilitated abuse, which significantly impact and traumatise victims. It also noted the evidence of Dr Michael Salter, who explained that services for recovery from complex trauma in particular are limited in Australia.180
2.150
Online harm leaves a long trail of trauma for its victims and creates a fundamental sense of not feeling safe, both online and offline. This feeling is accentuated by the inescapability of online harm – while harm experienced in the schoolyard, workplace or home is unquestionably traumatic, the evidence received by the Committee indicated that harm experienced online can impact every aspect of a person’s life. Victims feel that their abuser is always with them, regardless of the physical distance between them. Further, the broad audience that online abuse can attract can compound the harm experienced.
2.151
It is appropriate and necessary for survivors of all forms of online abuse to access not only a sense of resolution to their experiences, but also treatment for and recovery from ongoing trauma. The Committee encourages research bodies to examine how best to treat people who have experienced online abuse. It is also of the view that the healthcare sector should work with bodies such as the eSafety Commissioner, the Department of Infrastructure, Transport, Regional Development and Communications, and the Department of Health in addressing the need for trauma recovery services.
2.152
The evidence received by the Committee indicated that many victims attempt to report their experience of abuse to multiple agencies, including police and eSafety. This can potentially retraumatise victims, not only by requiring them to retell and relive their experiences many times over, but also through repeatedly being told that agencies cannot assist in the way a victim might expect.
2.153
The Committee is of the view that this experience may not always be trauma-informed and could potentially place victims’ mental health at risk. A single point of entry into a complaints reporting scheme, such as through the eSafety Commissioner, could reduce the traumatic load on victims when seeking help, and would encourage coordination and awareness between Australian Government agencies and state- and territory-based police forces.

Recommendation 6

2.154
The Committee recommends that the Office of the eSafety Commissioner be provided with adequate appropriations to establish and manage an online single point of entry service for victims of online abuse to report complaints and be directed to the most appropriate reporting venue, dependent on whether their complaints meet the requisite threshold, and in consideration of a variety of audiences such as children, parents/carers, women, people from culturally and linguistically diverse backgrounds, and other relevant vulnerable groups.
2.155
The Committee took particular note of the evidence received in relation to technology-facilitated abuse, which was highlighted by multiple stakeholders – including eSafety – as a critical issue in the online safety environment. The forms of harm that were identified and the all-encompassing effects on the victims of this form of abuse represent a dangerous development in family violence.
2.156
Further work in this space is required to understand how technology-facilitated abuse manifests, the ways in which technology is utilised to coerce and harm others in family violence situations, and the methods by which digital platforms and government agencies can reduce the prevalence of such behaviour.
2.157
The best forum for this work would be a future inquiry conducted by the proposed House Standing Committee in relation to online matters. This inquiry would seek advice from a broad range of stakeholders, including digital platforms, banks, and women’s safety groups, to identify an appropriate course of regulatory reform.
2.158
However, it was clear from the evidence provided to the Committee through submissions and public hearings that the challenge of technology-facilitated abuse is of such concern and scale that more must be done to support victims through existing Australian Government programs.
2.159
As such, the Committee strongly supports a significant increase in funding, including for specialised counselling and support services for victims of technology-facilitated abuse, in the next National Action Plan to End Violence Against Women and Children 2022-2032.

Recommendation 7

2.160
The Committee recommends that the Australian Government refer to the proposed House Standing Committee on Internet, Online Safety and Technological Matters, or another committee with relevant focus and expertise, an inquiry into technology-facilitated abuse, with terms of reference including:
The nature and prevalence of technology-facilitated abuse;
Responses from digital platforms and online entities in addressing technology-facilitated abuse, including how platforms can increase the safety of their users; and
How technology-facilitated abuse is regulated at law, including potential models for reform.

Recommendation 8

2.161
The Committee recommends that the Australian Government significantly increase funding to support victims of technology-facilitated abuse through existing Australian Government-funded programs. This should include additional funding for specialised counselling and support services for victims, and be incorporated in the next National Action Plan to End Violence Against Women and Children 2022-2032.
2.162
Finally, the Committee reiterates the thanks expressed in Chapter 1 to witnesses who shared their experiences of online harm. The Committee could not have developed its understanding of the prevalence, nature and impact of online harm, or have formulated its response to the evidence and its recommendations, without these powerful stories. The Committee commends the courage and resilience displayed by these witnesses.

  • 1
    Ms Christine Morgan, Chief Executive Officer and Prime Minister’s National Suicide Prevention Adviser, National Mental Health Commission (NMHC), Committee Hansard, 21 January 2022, p. 8.
  • 2
    eSafety Commissioner, Submission 53, p. 14.
  • 3
    Ms Christine Morgan, NMHC, Committee Hansard, 21 January 2022, p. 5.
  • 4
    Ms Christine Morgan, NMHC, Committee Hansard, 21 January 2022, pp 6-7.
  • 5
    Professor Amanda Third, Professorial Research Fellow, Institute for Culture and Society, Western Sydney University; Co-Director, Young and Resilient Research Centre, Western Sydney University (Young and Resilient Centre), Committee Hansard, 21 December 2021, p. 26.
  • 6
    Ms Christine Morgan, NMHC, Committee Hansard, 21 January 2022, p. 6.
  • 7
    Ms Sarah Davies, Chief Executive Officer, Alannah and Madeline Foundation (AMF), Committee Hansard, 21 December 2021, p. 23.
  • 8
    Ms Christine Morgan, NMHC, Committee Hansard, 21 January 2022, p. 7.
  • 9
    eSafety Commissioner, Submission 53, p. 26.
  • 10
    eSafety Commissioner, Submission 53, p. 26.
  • 11
    Online Safety Act 2021 (Cth), s 5. A second definition is included specifically in relation to online safety for children in section 6, which replicates the definition in section 5 but adds ‘and includes the protection of Australian children using those services from cyber‑bullying material targeted at an Australian child’.
  • 12
    Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, Committee Hansard, 21 January 2022, p. 20.
  • 13
    The Office of the eSafety Commissioner distinguishes between ‘cyberbullying’, which is online abuse directed at children, and ‘cyber abuse’, which is online abuse directed at adults. This distinction demonstrates that the same conduct directed at children may not affect adults in the same way, and vice versa. It also reflects the different ways and arenas that online abuse may be experienced by adults and children.
  • 14
    Office of the eSafety Commissioner, Adult cyber abuse, available at: https://www.esafety.gov.au/key-issues/adult-cyber-abuse (accessed 28 February 2022).
  • 15
    Online Safety Act 2021 (Cth), s 7.
  • 16
    Office of the eSafety Commissioner, Cyberbullying, available at: https://www.esafety.gov.au/key-issues/cyberbullying (accessed 28 February 2022).
  • 17
    Office of the eSafety Commissioner, Adult cyber abuse, available at: https://www.esafety.gov.au/key-issues/adult-cyber-abuse (accessed 7 March 2022).
  • 18
    eSafety Commissioner, Submission 53, p. 14.
  • 19
    eSafety Commissioner, Submission 53, p. 14.
  • 20
    eSafety Commissioner, Submission 53, p. 21.
  • 21
    WESNET, Submission 25, p. 2.
  • 22
    WESNET, Submission 25, p. 2.
  • 23
    eSafety Commissioner, Submission 53, p. 29.
  • 24
    WESNET, Submission 25, p. 3.
  • 25
    WESNET, Submission 25, p. 4.
  • 26
    WESNET, Submission 25, p. 4.
  • 27
    Ms Padma Raman, Chief Executive Officer, Australia’s National Research Organisation for Women’s Safety (ANROWS), Committee Hansard, 28 January 2022, p. 5.
  • 28
    Dr Bridget Harris, members of The Independent Collective of Survivors, Molly Dragiewicz and Delanie Woodlock, Submission 17, p. 2.
  • 29
    Dr Bridget Harris, members of The Independent Collective of Survivors, Molly Dragiewicz and Delanie Woodlock, Submission 17, p. 8.
  • 30
    Ms Padma Raman, ANROWS, Committee Hansard, 28 January 2022, p. 2.
  • 31
    Ms Karen Bentley, WESNET, Committee Hansard, 28 January 2022, p. 5.
  • 32
    Ms Karen Bentley, WESNET, Committee Hansard, 28 January 2022, pp 7-8.
  • 33
    eSafety Commissioner, Submission 53, p. 20.
  • 34
    Office of the eSafety Commissioner, Deepfake trends and challenges – position statement, 23 January 2022, available at: https://www.esafety.gov.au/industry/tech-trends-and-challenges/deepfakes (accessed 6 March 2022).
  • 35
    Office of the eSafety Commissioner, Deepfake trends and challenges – position statement, 23 January 2022, available at: https://www.esafety.gov.au/industry/tech-trends-and-challenges/deepfakes (accessed 6 March 2022).
  • 36
    Australian Centre to Counter Child Exploitation, What is online child sexual exploitation?, available at: https://www.accce.gov.au/help-and-support/what-is-online-child-exploitation (accessed 28 February 2022).
  • 37
    eSafety Commissioner, Submission 53, p. 23.
  • 38
    eSafety Commissioner, Submission 53, p. 23.
  • 39
    eSafety Commissioner, Submission 53, p. 24.
  • 40
    eSafety Commissioner, Submission 53, p. 26.
  • 41
    Orygen, Submission 27, p. 3.
  • 42
    Eating Disorders Families Australia, Submission 37, p. 1.
  • 43
    Digital Industry Group Inc., Submission 46, p. 5.
  • 44
    eSafety, Online Hate, available at: https://www.esafety.gov.au/young-people/online-hate (accessed 7 March 2022).
  • 45
    eSafety Commissioner, Submission 53, p. 26.
  • 46
    Harmony Alliance, Submission 34, p. 2.
  • 47
    Harmony Alliance, Submission 34, p. 3.
  • 48
    Ms Nyadol Nyuon, Incoming Chair, Harmony Alliance, Committee Hansard, 22 December 2021, p. 8.
  • 49
    Ms Nyadol Nyuon, Harmony Alliance, Committee Hansard, 22 December 2021, p. 7.
  • 50
    Harmony Alliance, Submission 34, pp 2-3.
  • 51
    Harmony Alliance, Submission 34, p. 4.
  • 52
    Harmony Alliance, Submission 34, p. 3.
  • 53
    Harmony Alliance, Submission 34, p. 3.
  • 54
    DIGI, Submission 46, p. 34.
  • 55
    Centre for Digital Wellbeing (CDW), Submission 47, p. 16.
  • 56
    CDW, Submission 47, p. 17.
  • 57
    Cited in Free TV Australia, Submission 42, p. 7.
  • 58
    CDW, Submission 47, p. 15.
  • 59
    Department of Home Affairs (Home Affairs), Submission 40, p. 10.
  • 60
    Home Affairs, Submission 40, p. 11.
  • 61
    CDW, Submission 47, p. 15.
  • 62
    CDW, Submission 47, p. 15.
  • 63
    eSafety Commissioner, Submission 53, p. 21.
  • 64
    eSafety Commissioner, Adults’ negative online experiences – eSafety research, August 2020, available at: <https://www.esafety.gov.au/sites/default/files/2020-07/Adults%27%20negative%20online%20experiences.pdf> (accessed 7 February 2022), p. 4.
  • 65
    eSafety Commissioner, Submission 53, p. 15.
  • 66
    eSafety Commissioner, Submission 53, p. 18.
  • 67
    Twitter, Rules Enforcement – Transparency Report, January to June 2021, 25 January 2022, available at: https://transparency.twitter.com/en/reports/rules-enforcement.html#2021-jan-jun (accessed 23 February 2022).
  • 68
    Meta, Submission 49, p. 16. Meta noted in its submission that this statistic was reflective only of bullying and harassment where Meta did not need additional information to determine if it violated its policies, such as a report from a person experiencing the conduct.
  • 69
    eSafety Commissioner, Submission 53, p. 17.
  • 70
    eSafety Commissioner, Submission 53, pp 17 and 23.
  • 71
    Home Affairs, Submission 40.
  • 72
    Butterfly Foundation, Submission 10; Eating Disorders Families Australia, Submission 37.
  • 73
    Ms Sonya Ryan, Chief Executive Officer and Founder, The Carly Ryan Foundation (CRF), Committee Hansard, 21 December 2021, p. 7.
  • 74
    Professor Amanda Third, Professorial Research Fellow, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 30; Ms Christine Morgan, NMHC, Committee Hansard, 21 January 2022, p. 5.
  • 75
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 1; eSafety Commissioner, Submission 53, p. 14.
  • 76
    Office of the eSafety Commissioner, Mind the Gap – Parental awareness of children’s exposure to risks online, February 2022, available at: https://www.esafety.gov.au/sites/default/files/2022-02/Mind%20the%20Gap%20%20-%20Parental%20awareness%20of%20children%27s%20exposure%20to%20risks%20online%20-%20FINAL.pdf (accessed 8 February 2022), p. 7.
  • 77
    Office of the eSafety Commissioner, Mind the Gap – Parental awareness of children’s exposure to risks online, February 2022, available at: https://www.esafety.gov.au/sites/default/files/2022-02/Mind%20the%20Gap%20%20-%20Parental%20awareness%20of%20children%27s%20exposure%20to%20risks%20online%20-%20FINAL.pdf (accessed 8 February 2022), p. 7.
  • 78
    Office of the eSafety Commissioner, Mind the Gap – Parental awareness of children’s exposure to risks online, February 2022, available at: https://www.esafety.gov.au/sites/default/files/2022-02/Mind%20the%20Gap%20%20-%20Parental%20awareness%20of%20children%27s%20exposure%20to%20risks%20online%20-%20FINAL.pdf (accessed 8 February 2022), p. 40.
  • 79
    Office of the eSafety Commissioner, Mind the Gap – Parental awareness of children’s exposure to risks online, February 2022, available at: https://www.esafety.gov.au/sites/default/files/2022-02/Mind%20the%20Gap%20%20-%20Parental%20awareness%20of%20children%27s%20exposure%20to%20risks%20online%20-%20FINAL.pdf (accessed 8 February 2022), p. 7.
  • 80
    The Social Switch Project and Dr Faith Gordon, Online Harms Experienced by Child and Young People: ‘Acceptable Use’ and Regulation – Executive Summary, November 2021, available at: https://static1.squarespace.com/static/5d7a0e7cb86e30669b46b052/t/618b7c55a660d050880bb03d/1636531286894/Online+Harms+Research+November+2021+-+Executive+Summary.pdf (accessed 6 February 2022), p. 2.
  • 81
    Ms Kathryn Mandla, Head, Advocacy and Research, yourtown, Committee Hansard, 21 December 2021, p. 32.
  • 82
    Ms Kathryn Mandla, yourtown, Committee Hansard, 21 December 2021, p. 32.
  • 83
    Ms Kathryn Mandla, yourtown, Committee Hansard, 21 December 2021, p. 32.
  • 84
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 2.
  • 85
    Ms Tracey McAsey, Manager, The Daniel Morcombe Foundation, Committee Hansard, 21 December 2021, p. 10.
  • 86
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 26.
  • 87
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 3.
  • 88
    Office of the eSafety Commissioner, Mind the Gap – Parental awareness of children’s exposure to risks online, February 2022, available at: https://www.esafety.gov.au/sites/default/files/2022-02/Mind%20the%20Gap%20%20-%20Parental%20awareness%20of%20children%27s%20exposure%20to%20risks%20online%20-%20FINAL.pdf (accessed 8 February 2022), p. 9.
  • 89
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 3.
  • 90
    CDW, Submission 47, pp 10-11.
  • 91
    Body Safety Australia, Submission 59, pp 3-5.
  • 92
    The Synod of Victoria and Tasmania, Uniting Church in Australia, Submission 52, p. 43.
  • 93
    The Synod of Victoria and Tasmania, Uniting Church in Australia, Submission 52, p. 43.
  • 94
    The Synod of Victoria and Tasmania, Uniting Church in Australia, Submission 52, p. 43.
  • 95
    Mr Toby Dagg, Executive Manager, Investigations, Office of the eSafety Commissioner, Committee Hansard, 3 February 2022, p. 15.
  • 96
    Ms Sarah Davies, AMF, Committee Hansard, 21 December 2021, p. 19.
  • 97
    Ms Sarah Davies, AMF, Committee Hansard, 21 December 2021, p. 19.
  • 98
    Body Safety Australia, Submission 59, p. 5.
  • 99
    Body Safety Australia, Submission 59, p. 5.
  • 100
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 11. The video cited by Dr Salter is available at <https://www.youtube.com/watch?v=O13G5A5w5P0>.
  • 101
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 11.
  • 102
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 11.
  • 103
    Daisuke Wakabayashi and Sapna Maheshwari, ‘Advertisers Boycott YouTube After Pedophiles Swarm Comments on Videos of Children’, The New York Times, 20 February 2019, available at: https://www.nytimes.com/2019/02/20/technology/youtube-pedophiles.html?action=click&module=RelatedCoverage&pgtype=Article&region=Footer (accessed 6 February 2022).
  • 104
    Max Fisher and Amanda Taub, ‘On YouTube’s Digital Playground, an Open Gate for Pedophiles’, The New York Times, 3 June 2019, available at: https://www.nytimes.com/2019/06/03/world/americas/youtube-pedophiles.html (accessed 6 February 2022).
  • 105
    eSafety Commissioner, Submission 53, p. 29.
  • 106
    Office of the eSafety Commissioner, Women in The Spotlight: How online abuse impacts women in their working lives, 2021, available at: https://www.esafety.gov.au/research/how-online-abuse-impacts-women-working-lives (accessed 27 January 2022).
  • 107
    Ms Julie Inman Grant, Commissioner, Office of the eSafety Commissioner, Committee Hansard, 3 February 2022, pp 23-24.
  • 108
    Miss Sall Grover, Founder and Chief Executive Officer, Giggle for Girls Pty Ltd (Giggle), Committee Hansard, 28 January 2022, p. 3.
  • 109
    Ms Nicole Shackleton, Submission 28, p. 9.
  • 110
    Miss Sall Grover, Giggle, Committee Hansard, 28 January 2022, p. 4.
  • 111
    Office of the eSafety Commissioner, Women in The Spotlight: How online abuse impacts women in their working lives, 2021, available at: https://www.esafety.gov.au/research/how-online-abuse-impacts-women-working-lives (accessed 27 January 2022).
  • 112
    Office of the eSafety Commissioner, Women in The Spotlight: How online abuse impacts women in their working lives, 2021, available at: https://www.esafety.gov.au/research/how-online-abuse-impacts-women-working-lives (accessed 27 January 2022).
  • 113
    Office of the eSafety Commissioner, Women in The Spotlight: How online abuse impacts women in their working lives, 2021, available at: https://www.esafety.gov.au/research/how-online-abuse-impacts-women-working-lives (accessed 27 January 2022).
  • 114
    Office of the eSafety Commissioner, Women in The Spotlight: How online abuse impacts women in their working lives, 2021, available at: https://www.esafety.gov.au/research/how-online-abuse-impacts-women-working-lives (accessed 27 January 2022).
  • 115
    Ms Julie Inman Grant, Commissioner, Office of the eSafety Commissioner, Committee Hansard, 3 February 2022, p. 24.
  • 116
    Ms Nicole Shackleton, Submission 28, p. 10.
  • 117
    Ms Nicolle Flint MP, Submission 70, pp 7-8.
  • 118
    Ms Nicolle Flint MP, Submission 70, pp 4-6.
  • 119
    Dr Kate Hall, Head of Mental Health and Wellbeing, Australian Football League (AFL), Committee Hansard, 1 February 2022, p. 18.
  • 120
    Dr Kate Hall, AFL, Committee Hansard, 1 February 2022, p. 18.
  • 121
    Ms Erin Molan, Committee Hansard, 18 January 2022, pp 2-3.
  • 122
    Ms Erin Molan, Committee Hansard, 18 January 2022, p. 2.
  • 123
    Ms Tayla Harris, Australian Football League Women’s (AFLW), Committee Hansard, 1 February 2022, p. 20.
  • 124
    Ms Tayla Harris, AFLW, Committee Hansard, 1 February 2022, p. 20.
  • 125
    Ms Tayla Harris, AFLW, Committee Hansard, 1 February 2022, p. 20.
  • 126
    Ms Tayla Harris, AFLW, Committee Hansard, 1 February 2022, p. 20.
  • 127
    eSafety Commissioner, Submission 53, p. 29.
  • 128
    Ms Nyadol Nyuon, Incoming Director, Sir Zelman Cowen Centre, Victoria University; Chair, Harmony Alliance, Committee Hansard, 22 December 2021, p. 7.
  • 129
    Ms Nyadol Nyuon, Harmony Alliance, Committee Hansard, 22 December 2021, p. 7.
  • 130
    Ms Nyadol Nyuon, Harmony Alliance, Committee Hansard, 22 December 2021, p. 7.
  • 131
    Ms Nyadol Nyuon, Harmony Alliance, Committee Hansard, 22 December 2021, p. 8.
  • 132
    Ms Nyadol Nyuon, Harmony Alliance, Committee Hansard, 22 December 2021, p. 8.
  • 133
    eSafety Commissioner, Submission 53, p. 29.
  • 134
    Ms Carly Findlay OAM, Committee Hansard, 22 December 2021, p. 1.
  • 135
    Ms Carly Findlay OAM, Committee Hansard, 22 December 2021, p. 1.
  • 136
    Ms Carly Findlay OAM, Committee Hansard, 22 December 2021, p. 1.
  • 137
    Ms Carly Findlay OAM, Committee Hansard, 22 December 2021, p. 1.
  • 138
    Ms Carly Findlay OAM, Committee Hansard, 22 December 2021, pp 2 and 6.
  • 139
    Ms Carly Findlay OAM, Committee Hansard, 22 December 2021, p. 5.
  • 140
    Ms Carly Findlay OAM, Committee Hansard, 22 December 2021, p. 4.
  • 141
    eSafety Commissioner, Technology-facilitated abuse of women with intellectual or cognitive disability, August 2021, available at: https://www.esafety.gov.au/research/technology-facilitated-abuse-women-intellectual-or-cognitive-disability (accessed 7 February 2022).
  • 142
    Ms Julie Inman Grant, eSafety Commissioner, eSafety, Committee Hansard, 3 February 2022, p. 23.
  • 143
    eSafety, Technology-facilitated abuse among Aboriginal and Torres Strait Islander women, August 2021, available at: https://www.esafety.gov.au/research/technology-facilitated-abuse-among-aboriginal-and-torres-strait-islander-women (accessed 7 March 2022).
  • 144
    Mr Chad Wingard, AFL, Committee Hansard, 1 February 2022, pp 24-25.
  • 145
    eSafety Commissioner, Submission 53, p. 29.
  • 146
    Ms Rita Jabri-Markwell, Adviser, Australian Muslim Advocacy Network, Committee Hansard, 1 February 2022, p. 21; Dr Andre Oboler, Chief Executive Officer and Managing Director, Online Hate Prevention Institute; Mr Peter Wertheim, Co-Chief Executive Officer, Executive Council of Australian Jewry, Committee Hansard, 22 December 2021, pp 12-13.
  • 147
    eSafety Commissioner, Submission 53, p. 29.
  • 148
    eSafety Commissioner, Submission 53, p. 27.
  • 149
    eSafety Commissioner, Submission 53, p. 28.
  • 150
    eSafety Commissioner, Submission 53, p. 27.
  • 151
    Ms Nicole Shackleton, Submission 28, p. 38.
  • 152
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 14.
  • 153
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 14.
  • 154
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 14.
  • 155
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 14.
  • 156
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 9.
  • 157
    Dr Michael Salter, Committee Hansard, 18 January 2022, pp 14-15.
  • 158
    Dr Kate Hall, AFL, Committee Hansard, 1 February 2022, p. 10.
  • 159
    Dr Kate Hall, AFL, Committee Hansard, 1 February 2022, p. 11.
  • 160
    Mission Australia and The Black Dog Institute, Youth mental health report: Youth Survey 2012-16, 2016, available at: https://www.missionaustralia.com.au/publications/youth-survey/706-five-year-mental-health-youth-report/file (accessed 13 January 2022).
  • 161
    eSafety Commissioner, Submission 53, p. 30.
  • 162
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 29.
  • 163
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 29.
  • 164
    eSafety Commissioner, Submission 53, p. 30.
  • 165
    Ms Christine Morgan, NMHC, Committee Hansard, 21 January 2022, p. 9.
  • 166
    Ms Christine Morgan, NMHC, Committee Hansard, 21 January 2022, p. 9.
  • 167
    Ms Sonya Ryan, CRF, Committee Hansard, 21 December 2021, p. 2.
  • 168
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 15.
  • 169
    WESNET, Submission 25, p. 5.
  • 170
    WESNET, Submission 25, p. 5.
  • 171
    Ms Nyadol Nyuon, Harmony Alliance, Committee Hansard, 22 December 2021, p. 8; Ms Erin Molan, Committee Hansard, 18 January 2022, p. 1.
  • 172
    Ms Nicole Shackleton, Submission 28, p. 7.
  • 173
    Ms Nicole Shackleton, Submission 28, p. 7.
  • 174
    Ms Nicole Shackleton, Submission 28, p. 7.
  • 175
    Ms Kate Everett, Founder, Dolly’s Dream, Committee Hansard, 27 January 2022, p. 3.
  • 176
    Ms Deborah Tsorbaris, Chief Executive Officer, Centre for Excellence in Child and Family Welfare, Committee Hansard, 27 January 2022, pp 3-4.
  • 177
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 26.
  • 178
    Professor Amanda Third, Young and Resilient Research Centre, Committee Hansard, 21 December 2021, p. 29.
  • 179
    Ms Alana Moller, President, Isolated Children’s Parents’ Association, Committee Hansard, 27 January 2022, p. 6.
  • 180
    Dr Michael Salter, Committee Hansard, 18 January 2022, p. 14.
