Social media platforms and other preventative measures
4.1
This chapter examines how cyberbullying could be addressed other than
through criminal offences. It considers the evidence the committee received
regarding:
- social media platforms, including their policies, procedures and practices; and
- education and prevention initiatives.
The policies, procedures and practices of social media platforms
4.2
As discussed in Chapter 2, the committee heard a great deal of evidence
about the concerning prevalence of cyberbullying, and the significant harm it
can cause to victims and those around them.
4.3
Several submitters posited that social media platforms have a role to
play in addressing cyberbullying.[1]
Some submitters also reported that social media platforms sometimes respond to
complaints slowly or inadequately.[2]
For instance, Ms Jenna Price, Committee Member, Women in Media,
explained that:
Sometimes you can complain about something that has happened
to you on social media and it takes days. It depends. If your group has a
strong connection with Facebook in Sydney, you can get help, but that's not
available to everybody and sometimes it's not available even to the people who
already have established that relationship...And then if they don't agree with
you then you have to appeal, and that takes more time, and in the meantime your
image, in whatever version it is, has been plastered all around the internet.[3]
4.4
The committee heard evidence from Facebook, Instagram, and the Digital
Industry Group Incorporated (DIGI). Each of these organisations highlighted that
social media platforms operate under terms of service or 'Community Guidelines'.[4]
DIGI further explained that:
...across the industry, we have:
- policies that prescribe how old you must be to use our services
- policies that outline what can and cannot be shared via our services
- tools that allow any of the millions of people who use our services to flag content to us that may violate our policies;
- we invest in tools that can provide additional protections for minors, and
- we invest in a reporting infrastructure that allows us to promptly review and remove any such content.[5]
4.5
The Law Council of Australia (Law Council) raised a number of issues
with the operation of Facebook's 'Statement of Rights and Responsibilities'
(Statement). These included whether an Australian minor is capable of agreeing
to the Statement, and therefore whether the Statement is legally binding.[6]
4.6
Facebook submitted that '[o]ur content policies have been developed with
the goal of allowing people to express themselves freely whilst also ensuring
that people feel safe and respected.'[7]
Ms Nicole Buskiewicz, Managing Director at DIGI, argued:
We have an interest, as an industry, to ensure that the
online space is a safe and respectful place. We want to ensure that people who
are using our services are having a positive experience online.[8]
4.7
The committee also heard that social media platforms are implementing
specific tools to improve safety for users. Ms Julie de Bailliencourt, Head of
Global Safety Outreach at Facebook, stated that '...the way technology is
evolving is exciting and offers a lot of possibilities that will help
complement the notice-and-take-down system...'.[9]
Ms de Bailliencourt provided some examples, including that Facebook checks 80 data
points when a new account is created to identify whether or not it is a fake account.
She also stated that:
...we have recently launched, in December, two anti-harassment
tools, one in Messenger and one on Facebook, that will basically leverage the
signal that we get from you. So, if you were blocking somebody on Facebook, and
if we had any indication that this person had created, let's say, a new account
or a similar, duplicate account, with the view to harass you, we have
established with high certainty that we can block those other accounts without
you having to do anything. We call this the super-block.[10]
The role of the eSafety Commissioner
4.8
As discussed in Chapter 1, the eSafety Commissioner has various powers to
address cyberbullying material targeting an Australian child. The eSafety
Commissioner explained that:
[a]s an office, we also work closely with social media sites
to get the cyberbullying material taken down. Thus far, we've had a 100 per
cent compliance rate and so have not had to use our formal powers. And we are
reaching out proactively to a broad range of online services and app providers
to make sure that they're in compliance with the scheme.[11]
4.9
The Office of the eSafety Commissioner (eSafety Office) submitted that
'[o]n balance, the Commissioner considers that the policies, procedures and
practices of the large social media services to address cyberbullying are working.'[12]
However, it also submitted that its role '...is to offer a safety net when a
social media service does not consider a report made to them under their
reporting tool to amount to a breach of their terms of use.'[13]
As the eSafety Commissioner explained:
...it's written into the provisions that the child, or the
parent or guardian must report to the social media sites initially. As Nicole
[Buskiewicz, Managing Director at the Digital Industry Group Incorporated]
said, that's the most expeditious way of getting that down in the first
instance. But if the content doesn't come down within 48 hours, they can come
to us. The role we play as a safety net is that a lot of the moderators,
depending on the platform, may have 30 seconds or a minute to look at the reports
as they come in. They're dealing with huge volumes and they often miss context.
What the young people that report to us can do is give us that context and we
can make a case on their behalf and advocate on their behalf. That's why we
have had success with the 700 cases we've brought down. There have been very
few times when the social media sites say 'You determined that this was serious
cyberbullying, we're going to contest you on this.'[14]
4.10
Ms Buskiewicz highlighted that '...no civil penalties have been levied
under the scheme since it started'. She argued:
This is because the robust and well-established
report-and-take-down systems that members have had in place for over a decade
leading up to the establishment of the eSafety office allow them to effectively
and expeditiously resolve complaints. With or without civil and criminal laws,
we will continue to do this.[15]
4.11
However, the eSafety Office also noted that there is room for
improvement:
Social media services champion community standards, rules and
basic norms of behavior on their platforms. However, sometimes the services
fall short of evolving these policies in response to malfeasance they are
witnessing on their platforms, and in ultimately enforcing these norms. The
Commissioner would like to see better policing of conduct by providers, as a
clear demonstration that they intend to be held to their published policies.
The Office understands that safety is a journey – not a final destination – and
we will continue to work with social media providers to share online abuse
trends and to encourage greater innovation and investment in safety
protections.[16]
4.12
When asked whether legislative changes might make the eSafety Office
more effective in addressing cyberbullying, the eSafety Commissioner stated
that '[w]e found the act quite workable, and our discretionary powers are quite
broad.'[17]
4.13
The eSafety Office cited some challenges in applying its end user notice
scheme in cases where the perpetrator's identity cannot be established from
public records. It stated that in order to access social media account data
from a platform hosted in the USA, a formal court or treaty process is
generally required. This process is most effective if a request to the service
to preserve the relevant data has already been made. The eSafety Office stated
that there may be merit in the office being able to reach a formal arrangement
with the Australian Federal Police (AFP) in which the AFP makes preservation
requests on behalf of the Office. The eSafety Office could then manage the court
process.[18]
4.14
The Law Council expressed support for the eSafety Commissioner's two‑tier
scheme, but recommended some changes.[19]
First, it recommended that the Tier 2 scheme be expanded to allow small
service providers to be declared as Tier 2. Second, it highlighted that a
social media platform's Tier 1 status can only be revoked (and replaced
with Tier 2 status) if 12 months has passed since it became a
Tier 1 service. The Law Council argued that '[t]his is a long period
in which serious consequences could occur from cyberbullying.' It recommended
that:
...the eSafety Commissioner be given a discretion to remove a
service's 'tier 1' status after a shorter period of time, if the provider
has clearly failed to remove material that has potentially serious consequences.[20]
4.15
The eSafety Office raised the possibility of increasing the basic online
safety requirements under the Enhancing Online Safety Act 2015 (Online
Safety Act) to require '...robust user settings and terms of use, clear and
unequivocal community standards, and a proactive approach to dealing with
cyberbullying on the platform.'[21]
4.16
The Australian Women Against Violence Alliance submitted that the work
of the eSafety Commissioner should be extended to focus not only on
cyberbullying directed at children but also on other groups at risk.[22]
The eSafety Commissioner also stated that this idea may have merit, noting that
the eSafety Office has received a growing number of complaints from adults
since its remit was expanded to include all Australians.[23]
The relevant groups of vulnerable adults could include people with disability,
Aboriginal and Torres Strait Islander people, people who identify as LGBTIQ, women
experiencing domestic violence, and people with a non-English speaking
background.[24]
The eSafety Commissioner noted that any extension of the scheme '...should come
with additional resourcing.'[25]
4.17
Additionally, the eSafety Commissioner expressed concern that the
definitions of 'social media service' and 'relevant electronic service' under
the Online Safety Act are not sufficiently clear and do not adequately capture
gaming platforms or anonymous social interaction apps such as Sarahah.[26]
The eSafety Office stated that it would be useful to amend these definitions so
that it could bring these kinds of platforms into the tier
scheme.[27]
A duty of care for social media platforms
4.18
Mr Josh Bornstein, Principal at Maurice Blackburn Lawyers, supported a
publicly funded regulator to monitor and investigate cyberspace issues and
safety breaches. However, he also stated that the regulator would be unable to
manage all cyberbullying cases because '...cyberspace is enormous...'. He proposed:
...empowering individuals, whether they are journalists who are
targeted and trolled or whether they are the parents of children who are
bullied online, to take legal action against Google, against Facebook and
against Twitter for, in effect, breaching their duty of care.[28]
4.19
Mr Bornstein argued that this would:
...provide a very strong financial incentive to the big social
media companies to clean up their act. In the same way that
we provide strong financial incentives to employers and to occupiers of
premises—to supermarkets—to make sure that their premises are safe when people
use them, we should require Facebook, Google and others to take all practicable
and reasonable steps to ensure that their sites are safe for users as well.[29]
4.20
Some other witnesses agreed that this kind of model is at least worthy
of consideration.[30]
Ms Van Badham of the Media, Entertainment & Arts Alliance stated:
I agree with Josh Bornstein's position, that there has to be
a duty of care. Coming from professional media anyway, if a publication, The
Guardian, The Australian, Fairfax, if any of the major media
organisations in this country were facilitating the harassment and abuse of
individuals, they would be held accountable. Social media are media
corporations. Facebook is effectively a modern newspaper. So is Twitter. It has
a pretty loose content policy, but those platforms exist as publication
vehicles, and they must take responsibility for the care of participants within
that.[31]
4.21
However, the submission from DIGI supported a contrary position:
Given the strong commitment of industry to promote the safety
of people when they use our services, we believe that no change[s] to existing
criminal law is required. If anything, we would encourage the Committee to
consider carve outs from liability for responsible intermediaries.[32]
4.22
Additionally, Ms Mia Garlick, Director of Policy, Australia & New
Zealand, Facebook and Instagram, was asked about legal liability for social
media platforms. She stated that '...regulations are clearly a matter for the
government. But from our perspective, regulation isn't what motivates us; it's
the consumer experience.'[33]
Facebook and Instagram also stated that:
[o]n Facebook, people choose who to be friends with, and which
Pages or Groups to follow. Consequently, people make a decision about the types
of content that they can see in their News Feed. News Feed then ranks the
stories based on how relevant a particular piece of content is that a person
has chosen to see. We do not write the posts that people read on our services.
While we are not in the business of picking which issues the
world should read about, we are in the business of connecting people and ideas
— and matching people with the stories they find most meaningful.[34]
4.23
The Law Council clarified that existing positive obligations under the Telecommunications
Act 1997 '...will generally not be applicable to social networking sites, as
they are not carriage service providers.' However, the obligations may apply to
the direct messaging applications of social media platforms, because '...these
messaging applications...may be regulated carriage service providers and hence
caught by the Telecommunications Act obligations.'[35]
4.24
Additionally, the Law Council referred to the civil penalty regime that
already exists under the Online Safety Act and is administered by the eSafety
Commissioner. The Law Council noted that civil penalties have not yet been
applied to social media platforms, and that the eSafety Commissioner reports
largely positive experiences with social media platforms. It submitted that it:
...does not consider that there has been a demonstrated need at
this time to impose a positive obligation by way of a criminal penalty on
social media services to remove cyberbullying content from their platform.[36]
Safety by design
4.25
Safety by design is '...the notion that safety ought to be built-in to
social media services from the outset as a fundamental and core principle of
design.'[37]
The eSafety Commissioner strongly supports this approach, and her office
submitted that:
[t]he Commissioner considers it is reasonable to expect that
large social media services should proactively adopt a 'safety first' approach
to engineering their platforms and features, much as they have already done
with 'security by design' and 'privacy by design'.[38]
4.26
The eSafety Office provided some positive examples of this, including:
- the Lego Life children's social networking app, which employed '...trained moderators to enforce an extensive code of conduct for users'; and
- Snap, Inc. requiring '...users to deliberately opt-in to the Snap Map feature, rather than opt-out.'[39]
4.27
The eSafety Commissioner also noted a negative example, Facebook Live:
When Facebook Live went live, it took about a dozen murders,
suicides and rapes on Facebook Live for them to say, 'We're going to hire 3,000
moderators,' yet Periscope and Meerkat had been out in the market for some
time, so they could have reasonably anticipated that there would be some safety
issues requiring moderation.[40]
4.28
The Alannah & Madeline Foundation submitted that '...the
"start-up" (innovation) culture of the technology industry
prioritises "testing in the marketplace" and responding to user
feedback to improve their services – this takes precedence over "user
safety by design".'[41]
Further, new start‑up platforms:
...can often have significant cyberbullying and harassment
issues, due to their lack of monitoring and reporting processes. Young people
are often the "play-testers" in this environment, as their age group
has a higher proportion of "early adopters".[42]
4.29
The eSafety Commissioner stated that one role her office would like to
play is to '...encourage companies to put safety by design first' and to
'...develop and implement stronger policies and enforcement procedures.'[43]
She also noted the difficulties of legislating on safety by design:
I think that would be very hard to implement. Technology is
always going to outpace public policy. I imagine any legislator would have a
hard time anticipating where the newest technology might be or where it might
go, and I don't think we want to stifle innovation.[44]
4.30
However, as the eSafety Commissioner stated:
...if [social media platforms are] not responsive or don't
acquiesce and they're active in our market and young people are being abused on
them, that's when we can go to the minister's office and declare them a tier-2 player.[45]
Social media platforms and data
4.31
Some submitters stated that it may be beneficial for social media
platforms to publish relevant data, including data about complaints received
and the platforms' responses to them.[46]
The National Children's Commissioner at the Australian Human Rights Commission
stated that reporting on data:
...provides the social media providers an opportunity to
enhance their education by demonstrating what they're doing in this space. If
they've got a good story to tell, I think they should be telling it.[47]
4.32
Ms Buskiewicz of DIGI explained that member companies' policies on releasing complaints data vary, and stated:
The member company policies vary on whether they release
those numbers. The example I can give is YouTube, which receives 275,000 flags
a day for review across all types of content, and that is in the context of
having 400 hours of video content uploaded to YouTube every day. So there is a
real volume of content that goes up.[48]
4.33
Ms Buskiewicz stated that she did not have this data for Australia only.[49]
4.34
In answering questions on notice, Facebook and Instagram stated that '[w]e
understand the rationale behind your requests for us to provide more detail
around the data showing reporting trends, however, unfortunately at this stage,
we are not able to do so.'[50]
However, Facebook and Instagram also highlighted methods of removing content
before users report it to them. For instance, they stated that:
...we use automation, image matching and other tools to
proactively identify and remove 99% of the terror-related content before anyone
in our community has flagged it to us, and in some cases, before it goes live
on the site.[51]
4.35
Ms de Bailliencourt of Facebook also said that complaint wait times
vary, but '[t]he vast majority...' are reviewed within 24 hours, and '[s]ome
may go to 48 hours.' She explained that '[w]e try to go even faster on
very sensitive reports, such as bullying. Suicide prevention is the one we try
to get to in minutes—when we're very good.'[52]
4.36
The eSafety Office told the committee that between
1 October 2017 and 31 January 2018, the '...average length of
time between the Office requesting removal of content from a social media
service, to the Office being informed that the material has been removed, was
39 hours.' It stated that '[i]n the majority of cases, material will have been
removed well before notification.' In addition, '[t]he fastest time for content
removal by a social media service following a request by the Office was
26 minutes.'[53]
4.37
Facebook and Instagram informed the committee that:
[w]e now have around 14,000 people working across community
operations, online operations, and our security efforts. We are committed to
increasing this number across all of these teams to a total of 20,000 by 2018.[54]
4.38
The eSafety Commissioner stated that, based on her industry experience,
social media moderators have a very short time to consider complaints:
It is 30 seconds to a minute. It may vary. You would have to
verify that with Facebook...Most of the social media sites have triaging
functions. So, depending on which boxes you tick, they'll be able to
determine—if it's image based abuse it may go to one queue, versus child sexual
exploitation versus bullying, or 'this comment was inappropriate'.[55]
4.39
The eSafety Office submitted that from 1 October 2017 to
31 January 2018 it responded to 97 per cent of all complaints about
cyberbullying within three hours and resolved complaints, on average, in 150
minutes.[56]
4.40
The social media representatives at the hearing on
9 February 2018 were asked whether they would object to a law requiring
platforms to publish data about complaints, broken down by category. Ms
Buskiewicz of DIGI said:
We need to ask: what are we trying to get from the numbers?
If we're talking about incentivisation, that's only one way. As Mia [Garlick of
Facebook and Instagram] said before, we are very much reliant on feedback and
we're continually striving to do better, and we will do that regardless of
whether we have to publish numbers.[57]
4.41
Ms Garlick of Facebook and Instagram argued that published data '...might
not be clear as to what it's showing', and added:
It's up to you guys to make recommendations and decisions on
the law. From our perspective, that's not going to be what motivates us to make
sure we're doing the best we can to remove the content as fast as we can.[58]
4.42
The eSafety Office stated that the publication of this kind of data
would be beneficial, but also acknowledged that the data's usefulness would be
limited:
Data about cyberbullying and other abuses collected by the
social media services would be useful to have. For example, the information
might be used to target resources, raise awareness, and provide education on
specific issues.
However, it is unclear how much weight we could place on this
type of information. There are two reasons for this. The first is that
user-flagging of objectionable material on a service will always rely on the
specific rules, guidelines or standards applicable to that service, rather than
the statutory thresholds employed by the eSafety Commissioner. The second is
that the data, being user-generated and unverified, may not reliably reflect
the actual incidence of cyberbullying on a platform.[59]
Education and prevention
4.43
A large number of submitters and witnesses, including government
agencies responsible for children and education, emphasised the importance of
education in addressing cyberbullying.[60]
As the Australian Government Department of Education and Training (Department
of Education) submitted:
In dealing with cyberbullying, the department supports a
whole-school, systemic approach that emphasises early intervention and provides
tiered levels of support for school children and young people affected by the
negative behaviour. Measures should be age-appropriate and child focused,
working with the person being targeted, their family and school, social media
services, the perpetrator and when appropriate the police to address the issue.[61]
4.44
The Department of Education also provided details on how the Australian
Curriculum addresses cyber safety and security both explicitly and implicitly
from Foundation to Year 10.[62]
4.45
However, the eSafety Commissioner stated that '...there isn't consistent
and comprehensive online safety education. Some schools do it really well; some
schools totally miss the boat.' She explained that the eSafety Office has:
...a certified online safety provider program, where I believe
there are about 127 presenters from 27 different organisations—everyone from
the Carly Ryan Foundation and the Alannah & Madeline Foundation to PROJECT
ROCKIT.[63]
4.46
Additionally, Ms Lesley Podesta, Chief Executive Officer at the Alannah
& Madeline Foundation, posited that:
[o]verwhelmingly—and this distresses me so much—it is a
postcode lottery as to whether the teacher knows what to do. It's not because
they don't care; it's because they have absolutely no resources or support
about the most effective pathways to deal with it.[64]
4.47
The committee heard evidence regarding many existing initiatives and
organisations offering education and support related to cyberbullying, including:
- Carly Ryan Foundation;[65]
- eSmart schools and eSmart libraries;[66]
- Kids Helpline;[67]
- Out of the Dark;[68]
- PROJECT ROCKIT;[69]
- ReachOut Australia;[70]
- Student Wellbeing Hub;[71]
- ThinkUKnow;[72]
- various initiatives cited by the Tasmanian Government;[73] and
- various initiatives cited by the Western Australia Department of Education.[74]
4.48
yourtown noted that children should not be the only focus of education
initiatives:
As with addressing other cyber safety concerns that confront
our children and young people on a daily basis, such as sexting and
pornography, government must recognise the importance, impact and potential
value of the behaviour and responses of not just cyberbullying victims and
perpetrators but also of bystanders, parents, teachers and wider support
services.[75]
4.49
Indeed, the Queensland Family and Child Commission submitted,
'[e]ducation initiatives must target adults as well as children and young
people.'[76]
The Australian Human Rights Commission also stated that '...parents are a
critical target group for public awareness and support for children as they
navigate online spaces...'.[77]
4.50
The committee heard that one important aim of education and awareness
initiatives is to encourage help‑seeking behaviour. As Professor Barbara
Spears of the Australian Universities' Anti-bullying Research Alliance stated:
There's a stigma attached to seeking help. It means: 'I'm
weak.' It means: 'I can't fix it myself.' We need to remember that young
adolescents are of the age where they are trying to develop and identify as
young, autonomous adults, and so they want to be seen to be solving problems
themselves. So we have to give them the skills. We have to help them understand
what help seeking means and how to go about looking for help and coping with
bullying.[78]
4.51
The eSafety Commissioner cited her office's research, which '...tells us
that young people are much less likely to use formal channels to seek support.'
She stated that:
[o]nly 50 per cent of young people turn to the family for
assistance, around 13 per cent will involve their school and only 12 per cent
report to a social media website. Fewer still, two per cent, report to the
police.[79]