
Chapter 5
Technology and child exploitation

5.1 The technology industry, including technology providers and social media platforms, has a key role to play in combatting online child abuse material (CAM). Though there have been improvements in recent years, the Office of the eSafety Commissioner (eSafety) submitted 'there is still much work to do', and observed:

…it is worth emphasising how critical a partner industry is in counter-CSEM [child sexual exploitation material] efforts. The modern Internet – its wires, hardware, data centres, and cabling – is almost entirely owned and operated by private concerns. That means that efforts to harden the online world against abuse by those producing and distributing CSEM will only be effective with sustained and systemic buy-in from the network operators, domain registrars, Internet address registries, domain administrators, hosting service providers, enterprise cloud providers and others. This requires sustained cross-jurisdictional efforts and consistency of regulation, globally.[1]

5.2 Building on the discussion in chapter 2 about how technology facilitates child exploitation, this chapter more closely examines the role of technology providers and the response of law enforcement in this area. The chapter examines key issues raised in evidence as follows:

Action by technology companies to combat child exploitation and CAM.

The age of social media users and potential use of age verification.

The implications of encryption, especially end-to-end encryption.

Cooperation of technology providers with law enforcement, including the mutual legal assistance process and other interactions.

Potential additional obligations for technology providers, including technology company staff based in Australia.

Regulatory and other action taken by eSafety, particularly under the Online Safety Act 2021 (Online Safety Act).

5.3 These topics were a key focus of the inquiry during the 47th Parliament, which is reflected in the detail and length of this chapter.

Current action by technology companies

5.4 As discussed in chapter 2, technological developments have amplified child exploitation crimes. This section reviews key evidence about some of the measures taken by technology companies to address misuse of their platforms.

5.5 Speaking about the industry broadly, eSafety submitted:

Most mainstream services have policies, rules, terms of use or community standards prohibiting child sexual exploitation and abuse on their platforms. When they become aware of such content, mainstream services which are subject to US federal law typically remove it, disable the relevant account, and report it to NCMEC [the National Center for Missing and Exploited Children].[2]

5.6 eSafety advised that reports by online services constitute the vast majority of CAM reports to NCMEC; of 29.1 million reports in 2021, only 0.8 per cent came from members of the public. eSafety explained that most online services check for CAM 'using well-established photo matching technologies' and, in addition:

Services can also detect and action CSEM through Trust and Safety teams and automated tools. Some of this work is proactive, such as scanning content for potential CSEM at upload, and some is reactive, such as providing reporting mechanisms for users to notify potential CSEM to the service.[3]

5.7 However, eSafety said services vary in the effectiveness of their measures to detect and respond to CAM, their level of investment, innovation and collaboration, and the transparency of their reporting.[4] The eSafety Commissioner has observed in an opinion piece that:

…in 2022, NCMEC received around 32 million reports of child abuse material from tech companies. This was mostly from Meta, owner of Facebook, WhatsApp and Instagram, which reported around 27 million instances of online child abuse.

But juxtapose this with other tech behemoths like Apple, and you start to see the problem.

In the same year that Meta reported its 27 million child abuse images, Apple, with its billions of handsets, hugely popular iMessage service and iCloud file and photo storage services, reported just 234.[5]

5.8 Deputy Director at the Australian Institute of Criminology, Dr Rick Brown, advised that just five platforms accounted for 97 per cent of the 29 million reports to NCMEC in 2021, namely Facebook, Google, Instagram, Snapchat and WhatsApp.[6]

5.9 eSafety welcomed initiatives by major industry platforms which it said have 'had a tangible impact on the ability of offenders to find, share and store CSEM online'. eSafety gave several examples, including:

Deterrence messaging that Google presents to users in many countries (including Australia) who attempt to search for CAM.

Google's Content Safety API, which is 'an artificial intelligence classifier for CSEM' that Google provides to customers for free. It is 'intended to help organisations scale and prioritise decisions around [removing content]'.

PhotoDNA, a technology developed in 2009 by Microsoft and Dartmouth College. PhotoDNA is 'a "hashing" technology able to convert images into a unique signature', which can then be used to find similar images. It is used widely by industry and NGOs to detect and remove known CAM.

Other hashing technologies, including two developed and offered for free by Facebook (now Meta), known as PDQ and TMK+PDQF.

Project Artemis, which is an anti-grooming tool that 'helps with moderation of high-risk conversations on platforms that flag potential grooming efforts'. It was developed by Microsoft in collaboration with The Meet Group, Roblox, Kik and Thorn, and is 'made freely available by Thorn to qualified organisations that offer a chat function as part of their service'.

Tools proposed by Apple to 'warn children and their parents when receiving or sending messages containing nudity' and to 'provide warnings and information to those who attempt to search for CSEM using Apple services'.[7]

5.10 While scanning tools can be useful, there may also be shortcomings. For instance, Associate Professor Benoit Leclerc, Associate Professor Jesse Cale and Professor Thomas Holt submitted:

The inherent benefit of such software is limited by the fact that they may only be used at specific times. For instance, some cloud storage providers indicate that they implement Microsoft's PhotoDNA tool to scan for CEM [child exploitation material]. Recent reporting indicates that the tool is only implemented, however, when content is directly distributed from a user account which limits its ability to be used for proactive/preventative scanning. Others, such as Amazon do not appear to implement these tools at any time. On the one hand, minimizing intrusive scanning is necessary to ensure user privacy and minimize the risk of false positive identification. On the other, these policies enable CEM content to be stored on their infrastructure without detection so long as the user does not proactively share content with others.[8]

5.11 All technology companies that gave evidence to the committee opposed the use of their platforms for CAM.[9] For example, Google described itself as 'one of the leaders in fighting CSAM [child sexual abuse material] and our goal is to make sure that we are not part of the supply chain for this content'.[10] TikTok submitted that it designed its platform, features and processes 'with the safety of minors front of mind',[11] and Meta recognised that 'there is a continuous responsibility for all stakeholders - government, industry, and the broader community - to work together to protect children'.[12]

5.12 Technology providers also highlighted their use of detection tools; for instance, Ms Emily Cashman Kirstein of Google told the committee:

We rely on two equally important technologies when we're doing this detection. One is hash matching; previously identified CSAM content would be flagged by our systems using that technology. They have assigned digital signatures, and if those signatures match known sets of CSAM, that would trigger a match.

We also use AI [artificial intelligence] and machine learning to detect content that may be new or to help sift through very large amounts of data to prioritise what is most likely to be CSAM for review. On the AI piece, when our AI tools detect that content, we do ensure that it goes through a confirmation process that includes human review to verify that it does in fact contain CSAM before further action is taken.[13]
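
The hash-matching approach described in this evidence can be illustrated with a brief conceptual sketch. The example below is illustrative only: it uses a toy 'average hash' and placeholder signatures, not PhotoDNA, Google's systems or any platform's actual algorithms or databases, which are proprietary and far more robust.

```python
# Illustrative sketch only: the general shape of "hash matching" against a
# database of known digital signatures. The toy average-hash and the
# placeholder hash set below are assumptions made for the example; real
# systems such as PhotoDNA use far more robust perceptual hashing.

KNOWN_SIGNATURES = {
    0x9F3A6C21D4E8B07F,  # placeholder values standing in for a vetted database
    0x1B2C3D4E5F607182,
}

def average_hash(pixels_8x8: list[int]) -> int:
    """Toy perceptual hash: 64 grayscale pixels (0-255) -> 64-bit signature.
    Visually similar images produce signatures that differ in few bits."""
    avg = sum(pixels_8x8) / len(pixels_8x8)
    bits = 0
    for p in pixels_8x8:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit signatures."""
    return bin(a ^ b).count("1")

def matches_known_material(pixels_8x8: list[int], threshold: int = 8) -> bool:
    """Flag content whose signature is within `threshold` bits of a known hash."""
    signature = average_hash(pixels_8x8)
    return any(hamming_distance(signature, known) <= threshold
               for known in KNOWN_SIGNATURES)
```

As the witnesses noted, a hash match of this kind only triggers further action after review, and the separate AI classifiers described above are used to surface material that has not previously been hashed.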

5.13 Meta highlighted that its PDQ and TMK+PDQF tools are used by the Australian Federal Police (AFP) 'as part of their work to protect children within Australia'.[14] Meta's Regional Director of Public Policy, Ms Mia Garlick, reported that 99.5 per cent of the CAM Meta detected on its platforms was 'detected and removed proactively before anyone had to see it and report it to us'.[15] Meta also reported some findings from its analysis about the reasons people share CAM on its platforms. As well as those with 'malicious intent towards children', it found:

…people may share CSAM with non-malicious intent (for example, out of shock or outrage, out of ignorance, in poor humour [eg. someone sharing an image of a child's genitals being bitten by an animal], or children sending sexual imagery of themselves to another child). While our work to understand intent is still ongoing, our initial estimates suggest that 75 per cent of CSAM sharing on our services is due to people sharing it with non-malicious intent.[16]

5.14 TikTok uses other platforms' detection tools, such as Google's Content Safety API and Microsoft's PhotoDNA, and also applies an 'automated moderation process' to livestreams, videos and images uploaded to its platform. Under this process, 'our systems work to detect and remove violations of our Community Guidelines, including Child Sexual Abuse Material'. TikTok said:

Approximately 98.5% of proactively detected violations of all minor safety policies (not exclusive to CSAM) are removed before they're reported by users and 93.9% have zero views, meaning none of our community saw the content before it was removed.[17]

5.15 Twitch suggested that tools to identify known CAM, such as hash matching, are not effective on a livestreaming service like Twitch 'where the vast majority of content is new, live, and ephemeral (as opposed to pre-recorded and uploaded)'. It submitted:

We must therefore take a multi-pronged approach, invent our own solutions, and adapt existing, third-party solutions to tackle unknown CSAM and grooming. For example, we utilize technologies to scan livestreams for possible nudity, which is not allowed on our service and, taken together with other signals, can also be an indicator of potential CSAM. We also apply various machine learning models to identify potential CSEA [child sexual exploitation and abuse] content and grooming in text.[18]

5.16 Some platforms provided data about the accounts they have suspended for CAM-related activities. For example, Ms Lucinda Longcroft of Google advised that, in 2022 across Google's platforms, it 'disabled over 600,000 accounts for possessing CSAM'.[19] Ms Kathleen Reen, representing X (Twitter), reported that 2.5 million accounts were suspended in April 2023 'for sharing or engaging with child sexually abusive media', an increase from 160,000 accounts in May 2022.[20]

5.17 A further measure reported by some platforms is to restrict search terms associated with CAM. For instance, Twitch submitted that it 'block[s] the ability to use certain search terms or phrases to find content on Twitch to reduce the ability of predators to locate channels that could be susceptible to grooming'.[21] Meta submitted that it presents a pop-up to people who search for terms associated with child exploitation, which 'offers ways to get help from offender diversion organisations and shares information about the consequences of viewing illegal content'.[22] Regarding Google, Ms Longcroft submitted:

It's our policy to deindex search results that lead to CSAM or material that appears to sexually victimise, endanger or exploit children. Additionally, if someone uses a search term that makes our algorithms think they're looking for child abuse content, we filter out explicit associations and display warnings that CSAM is illegal and provide information to report CSAM. Here in Australia, we display information to report CSAM to the eSafety Commissioner.[23]

5.18 Some technology platforms also pointed to their investment in combatting CAM. For example, Ms Garlick advised that Meta has 'more than 40,000 people working on safety and security' and has 'invested more than A$23 billion on safety and security in the last seven years'.[24] TikTok advised that it has over 40,000 safety professionals and it 'invest[s] heavily in a robust Trust and Safety department, with locations in Australia and across the world operating 24 hours a day, 7 days a week, across dozens of languages'.[25] When asked about its staffing arrangements, TikTok clarified:

Within our global Trust & Safety workforce we employ specialist staff with regional or country-specific expertise, who work alongside specialists with subject matter expertise who may have responsibilities spanning multiple markets, including Australia (e.g. expertise in minor safety, illegal activities and regulated goods). TikTok employs approximately 400 staff with dedicated responsibility for the Australian market, most of whom work as part of larger teams situated in our APAC Trust & Safety hub in Singapore. We have 4 Trust & Safety team members based in Australia.[26]

5.19 Regarding the risks of generative artificial intelligence, including the disturbing trend of computer-generated CAM, Ms Cashman Kirstein of Google stressed that no CAM is allowed on Google's platforms regardless of how it is produced:

We have…a long track record of combating child abuse and exploitation online, and our approach to generative AI is no different. We do not allow CSAM on our platforms, and that includes generative AI CSAM. We've built safeguards into Google's AI products to detect and prevent results that would lead to that, and our Generative AI Prohibited Use Policy makes it clear that users may not use our generative AI tools to generate any kind of sexually explicit content, including CSAM illegal content.[27]

5.20 Ms Cashman Kirstein added:

I would say that we can speak to our services and the work that we're doing to combat generative AI CSAM. We have protections in place to mitigate risks that our model, our generative AI tools, would be misused to generate exploitative content related to children, including child safety testing and things of that nature. It is absolutely something that we are aware of and building safeguards for.[28]

5.21 The committee also received evidence about industry collaborations, such as the Technology Coalition, which is 'a global alliance of 24 leading technology companies united to protect children from online sexual exploitation and abuse'. The Technology Coalition submitted that it had recently organised several events drawing together key stakeholders, including law enforcement, for discussion about responding to online CAM. It also highlighted that Technology Coalition member companies provide 98 per cent of all reports to NCMEC's CyberTipline.[29]

5.22 Several submitters pointed to the value of such collaborative efforts.[30] For instance, Google submitted:

We know that no one company or platform can do it alone when it comes to protecting children. That is why Google has significantly contributed to the Technology Coalition, building shared knowledge, funding research and developing cutting edge technology and coming together with others - government, educators, parents, law enforcement - to protect children on our platforms and across the Internet.[31]

5.23 In a joint submission, the Communications Alliance & Australian Mobile Telecommunications Association confirmed that 'several carriers block access to the INTERPOL "Worst of" list of domains that disseminate the most severe child abuse material worldwide'.[32]

The age of social media users

5.24 Several platforms that gave evidence in this inquiry require their users to be at least 13 years of age.[33] They may also apply certain restrictions to users who are under 18. For instance, TikTok users aged 13 to 15 'have no option' to activate direct messaging, and for those aged 16 to 17, 'the messaging feature is set to private by default, and messages received from non-friends are shown in a separate inbox'.[34] For users of X aged 13 to 18, Mr Nick Pickles, Head of Global Government Affairs, said 'we default you to our maximum safety settings, so there are no direct messages and it's a private account'.[35] Twitch reported that it 'updated the default privacy settings for our direct messaging feature to not allow messages from strangers for all new users and existing users under 18'. In addition, Twitch's terms of service require that users 'between the ages of 13 and the local age of legal majority may only use Twitch under the supervision of a parent or legal guardian'.[36]

5.25 Noting that the data are not directly comparable, there was variation in the number of accounts removed by platforms because the user did not meet the age requirement. Ms Reen advised that X removed 7,000 Australian users on that basis between January 2022 and January 2023.[37] Meta said that in the third quarter of 2021 it removed more than 2.6 million Facebook accounts and 850,000 Instagram accounts globally.[38] TikTok reported that of its more than 1 billion active monthly users, it removed approximately:

16.9 million accounts in the first quarter of 2023;

17.9 million accounts in the fourth quarter of 2022;

19.7 million accounts in the third quarter of 2022; and

20.6 million accounts in the second quarter of 2022.[39]

5.26 Platforms described some of the measures deployed to ensure that users are the age they claim to be. For instance, TikTok submitted that it is 'fully committed to enforcing our age policies and deploys a range of mechanisms and safeguards to ensure users have an age-appropriate experience'. These include:

Setting the minimum age of our App in the Apple App Store as 12+ and Google Play Store at "Parental Guidance Recommended" so that parents can simply block their children from downloading TikTok in the first place.

Requiring users to declare their age via a neutral age gate that does not provide any signal to users of the minimum required age. If they fail the age gate, we do not specifically tell them this is because they are not old enough.

Allowing users to report an account when they think a user is underage.

Training our moderators to flag any suspected underage accounts with a specialist team who will make an assessment about whether a user is likely to be in violation of our age policy. As the figures in our enforcement report show, we are removing these users aggressively, and at notable scale.[40]

5.27 TikTok also said, regarding users whose declared age may not be accurate, that it may 'infer age-range based on information such as user activity on TikTok'.[41]

5.28 Meta submitted that it accepts user reports and trains its content moderators to flag accounts that appear to be used by an underage person. In addition, Meta 'has been investing in artificial intelligence tools to help us understand someone's real age, and we've developed technology that allows us to estimate people's ages, like if someone is below or above 18'. It explained:

We determine a user's age by training our technologies to read multiple signals. In August 2021 we announced that we'll look at things like people wishing friends a happy birthday and the age written in those posts: for example, "Happy 21st Birthday!". We also look at the age users have shared across apps: for example, if a user has shared their birthday on Facebook, we'll use the same for linked accounts on Instagram. We also use this technology to find and remove accounts belonging to people under the age of 13.[42]
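
The kind of signal Meta describes, such as the age written in a birthday greeting, can be sketched with a simple example. The sketch below is hypothetical and does not reflect Meta's actual models, which combine many signals using machine learning.

```python
import re

# Hypothetical illustration of one age signal of the kind Meta describes:
# extracting an age from greetings such as "Happy 21st Birthday!". Real
# systems combine many such signals with machine-learning models.

BIRTHDAY_PATTERN = re.compile(
    r"happy\s+(\d{1,3})(?:st|nd|rd|th)\s+birthday", re.IGNORECASE
)

def ages_from_greetings(posts: list[str]) -> list[int]:
    """Collect ages mentioned in birthday greetings addressed to a user."""
    ages = []
    for post in posts:
        match = BIRTHDAY_PATTERN.search(post)
        if match:
            ages.append(int(match.group(1)))
    return ages

def likely_under_13(posts: list[str]) -> bool:
    """A single weak signal: any greeting implying the user is under 13."""
    ages = ages_from_greetings(posts)
    return bool(ages) and min(ages) < 13

# Example usage
print(likely_under_13(["Happy 12th birthday!!", "great photo"]))  # True
```

In practice a rule of this kind would be one weak signal among many; Meta describes combining such signals rather than relying on any single indicator.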

5.29 Meta has also been trialling new ways for some users to verify their age on Instagram. In June 2022, it announced '[i]f someone attempts to edit their date of birth on Instagram from under the age of 18 to 18 or over, we'll require them to verify their age using one of three options: upload their ID, record a video selfie or ask mutual friends to verify their age'. The video selfie option uses facial age estimation. This trial was rolled out in Australia in March 2023.[43]

5.30 Twitch referred to various detection measures on its platform, including 'automated solutions that use behavioral and language cues to identify users who are likely under 13'. In addition:

To prevent any users under 13 who made accounts with false age information from going live and putting themselves at risk before we're able to identify and remove their account, we have introduced mandatory phone verification requirements before potentially vulnerable accounts are able to livestream via mobile. We do this based on content categories that are often streamed or viewed by a younger audience.[44]

5.31 Mr Pickles, representing X, said that users are asked to disclose their age:

It's the industry standard. We ask people to tell us their age, and our evidence shows that the majority of young people are using other services. But I'm not going to sugarcoat it and say there is an easy solution to age verification. It's one of the biggest policy challenges the entire technology industry faces. I know there are consultations in Australia right now, and many other countries, trying to figure out how to do this, because it is really important. But, right now, we and a number of our services ask people to declare their age when they use the service. If we think they've told us before and they're changing their number, we will suspend those accounts. So we do look for people coming back and trying to change their age.[45]

5.32 When asked about age verification systems, TikTok's response included:

TikTok acknowledges existing industry-standard age verification systems have their limitations. We are committed to working with industry peers, regulators, and key stakeholders to find industry-wide solutions that further improve methods to detect and remove underage users, while also respecting user privacy and adhering to the Australian Privacy Principles.[46]

5.33 A specific recommendation on this matter was made by the Uniting Church in Australia, Synod of Victoria and Tasmania (Uniting Church Synod), which submitted that '[i]n terms of product design, there is a lack of safeguards for children using the products and a lack of corporations enforcing their own policies'. In recommending that social media corporations be required to not allow children under the age of 13 to open accounts on their platforms without verified parental or guardian consent, the Uniting Church Synod submitted:

Facebook and Instagram could enforce the age limit policy more effectively, but choose not to. It is reasonable to assume that this is because of the cost that would be involved. When a child opens a Facebook account, they usually start to post photographs of themselves and their friends, who are generally of similar age. They go on to post comments about school, classmates and their activities. A scan of Facebook pages would quickly and easily pick up many of the pages opened by children. The lack of identity verification also has meant that child sexual abuse perpetrators can set up multiple Facebook accounts, pretending to be children themselves. These profiles are then used for activities like grooming and sexual extortion, with a vast pool of potential victims to prey on.[47]

5.34 Safe on Social raised concerns about children being exposed to predators in online games such as Roblox. It submitted that half of the game's 43 million daily users are under 13 years old, and said:

Despite the current recommended age rating for Roblox being 12+ on both the Apple store and Google Play stores, the game is downloadable on all smartphones, tablets, devices, desktop computers, Xbox, and Nintendo Switch. There is no age verification beyond the age recommendation guideline of 13+ in the Terms and Conditions of use.[48]

5.35 Collective Shout expressed concern about children being exposed to online pornography and submitted that it has 'long highlighted the links between pornography and the normalisation of child sexual abuse'. It recommended that the federal government 'fast-track an age-verification system' and noted legislative developments on this topic in Canada and Germany. Collective Shout advanced:

Data indicates that there is growing interest in CSAM. It is driven by exposure to extreme porn, and may be triggered by accidental exposure to it, more powerful even than pathological motivations or drivers such as sexual urges (although convicted CSAM offenders do exhibit high rates of paedophilic interests). Child abuse expert Michael Sheath believes he is seeing "a dangerous cultural shift in the profile of offenders, brought about by the enormous change that increasingly extreme pornography is having on the developing teenage mind." Almost half of the 3,035 offenders in the criminal justice system for possessing CSAM (in Queensland, 2018) were themselves children under the age of 17.[49]

5.36 Dr Brown of the Australian Institute of Criminology advised that there is 'quite a body of research that shows the escalation from adult pornography into child sexual abuse material', and noted that the Institute is undertaking further research on this matter. He submitted:

Age verification is an interesting issue from a number of perspectives. One is the one that's being pursued around adult pornography and blocking access to those who are underage, to minors. There's the flipside as well, which is age verification for users of services that are aimed at adults but with children using those as well. So I think age verification plays a number of roles in dealing with this particular issue.[50]

5.37 Recent media reported that the eSafety Commissioner, Ms Inman Grant, is concerned about children bypassing age verification online. Ms Inman Grant reportedly highlighted that 'young people are being groomed and coerced into sexual acts that are recorded on smart devices, on webcams remotely'. While stating that '[p]arents need to be involved in their kids' online lives the way we are their everyday lives', she also spoke about the responsibilities of technology companies; the report said:

While many platforms restrict what features users can access based on age, it is not hard to just put in an earlier birthdate.

"Self-declaration makes it too easy to lie," she [Ms Inman Grant] said.

"And not only do children prevaricate about their age, but adults can create fake or imposter accounts and pretend to be a child.

"We do need more rigorous age verification, but I'd say verification technologies overall."[51]

5.38 Following a recommendation of the House of Representatives Standing Committee on Social Policy and Legal Affairs, eSafety recently produced a Roadmap for age verification and complementary measures to prevent and mitigate harms to children from online pornography.[52] Though the roadmap is focused on access to online pornography, it also acknowledges that 'the ability of online service providers to ascertain the age of their users is essential to keeping children safe from a wider spectrum of risks and harms beyond pornography'.[53]

5.39 The roadmap was provided to government in March 2023, and the government responded in August 2023.[54]

5.40 Among eSafety's recommendations was a pilot of age assurance technologies.[55] In its response, the government highlighted the process for industry codes under the Online Safety Act 2021. The first phase of industry codes, which is underway, deals with 'class 1' content (which is content that would likely be refused classification in Australia, including CAM). The second phase will relate to 'class 2' content (which is content that is legal but not appropriate for children). The Minister for Communications has asked the eSafety Commissioner to commence work on the second phase 'as soon as practicable, following the completion of the first tranche of codes'. The government will 'await the outcomes of the class 2 industry codes process before deciding on a potential trial of age assurance technologies'.[56]

5.41 The government's response also highlights various other measures, including that it will monitor developments on age assurance requirements in other jurisdictions. The matter is to be considered in a review of the Online Safety Act 2021; the government announced the review will commence in early 2024.[57]

Encryption

5.42 eSafety explained what encryption is and how it is used:

Digital encryption is not new and, in its modern form, has been used for more than 40 years as an essential tool for privacy and security. It is primarily employed to keep data and transactions secure and to prevent data breaches and hacking. It allows legitimate, positive and safe communication where this may not otherwise be possible, and is used to protect valuable information such as passport credentials.

However, encryption can also assist in serious harms by hiding or enabling criminal activities, including online child sexual abuse.[58]

5.43 One form of encryption is end-to-end encryption, which is 'a method of secure communication that allows only the people communicating with each other to read the messages, images or files being exchanged'. Examples of services using end-to-end encryption are WhatsApp, Signal, Skype and Telegram.[59] This differs from other forms of encryption that allow the technology company to access the content.[60]
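
The property described above, that only the communicating parties can read the content, can be sketched with a short public-key example. The sketch below uses the PyNaCl library and is a simplified illustration only; production messaging protocols (such as the Signal protocol used by WhatsApp) add forward secrecy, authentication and other protections.

```python
# Simplified illustration of end-to-end encryption using the PyNaCl library
# (pip install pynacl). It shows only the core property that the platform
# relaying the message cannot read it; real protocols are more elaborate.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device; private keys never
# leave the device, so the service provider never holds them.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"message visible only to Alice and Bob")

# The platform can store and forward `ciphertext` but cannot decrypt it.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"message visible only to Alice and Bob"
```

Because the server-side scanning tools described earlier operate on plaintext, they cannot be applied to the ciphertext the platform relays, which is the tension examined in the remainder of this chapter.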

5.44 Technology platforms apply end-to-end encryption in differing ways and to various degrees. For instance, Meta has announced plans to apply end-to-end encryption to Facebook Messenger and Instagram Messenger in 2023.[61] Google applies end-to-end encryption on some services, and advised, in August 2021, that it 'has begun a phased roll out of [end-to-end encryption] for one-to-one conversations solely between Android Messages users'.[62] Regarding X, Mr Pickles advised that there has been 'a limited rollout of end-to-end encryption in direct messages', though it is 'not currently available to every user' and 'our intent is to ensure that for accounts that show risk signals…we would not allow those accounts to use end-to-end encryption'.[63] As for TikTok's use of end-to-end encryption, Ms Woods-Joyce advised:

TikTok's messages are encrypted at rest and while they're in transit, but we don't have an end-to-end encrypted message like you may see in another service format, so there is some moderation that's able to happen within our messages. Those users that are aged 13 to 15, as we discussed before, have direct messages turned off altogether. Where any user might have received content through their message facility, it is still subject to our community guidelines. It can be reported and moderated. As well as that, where there is a valid legal request, the content of those messages can be accessed by law enforcement when that structure is in place.[64]

5.45 Some inquiry participants highlighted the benefits of encryption.[65] For instance, Google described it as 'a critically important tool in protecting individuals, corporations, governments and agencies from a broad range of security threats'.[66] Meta said that end-to-end encryption has become 'the global security standard for many online services', and observed that '[a]ll the top ten messaging services in Australia (such as Apple's iMessage and Signal) offer end-to-end encrypted services'.[67]

5.46 The committee also heard that many ordinary users value strong privacy protections; for instance, Ms Garlick of Meta said:

The nature of encryption and the popularity of it is because it provides safety and security for people's messages. I think in the context of the many data breaches that we've seen we can really see the value of encryption.[68]

5.47 When asked about the competing interests of individuals' privacy and children's safety, Meta's Head of Global Safety, Ms Antigone Davis, proposed that privacy 'is actually essential to safety'. She gave the example that 'setting in place certain privacy defaults for minors is quite important for safeguarding them'.[69]

5.48 Digital Rights Watch advanced that '[f]raming encryption as merely an enabler of criminal activity obscures the essential role that encryption plays in Australia's modern digital society, economy and security'. As well as benefits to Australia's national interests and economy, Digital Rights Watch submitted that encryption is 'critical to ensure children's safety'. It explained that children's devices 'contain personal information that could compromise their privacy and safety (both physically and digitally) if accessed by malicious actors', and also that encryption is 'crucial to prevent access to networked devices, including tapping into users' webcams, microphones or other devices, such as baby monitors or smart toys'.[70]

5.49 Digital Rights Watch also suggested that encryption 'supports fundamental human rights':

End-to-end encryption is particularly essential to uphold the right to privacy, but in addition to this, it facilitates freedom of speech and expression, freedom of assembly, and the right to protest. Research has shown that end-to-end encryption is a vital safety tool for protecting human rights, and that the downsides of its implementation do not outweigh the benefits.[71]

5.50 Digital Rights Watch pointed out that the United Nations Special Rapporteur on Freedom of Expression 'referred to end-to-end encryption as "the most basic building block" for digital security on messaging apps'. It said the Rapporteur has suggested that 'companies that offer messaging apps "should seek to provide the highest user privacy settings by default"'.[72]

5.51 More generally, Digital Rights Watch proposed that:

…while technology can and should play a role in tackling the issue of CSAM and child exploitation more broadly, centralised techno-solutionism in response to complex social problems often results in shortsighted or sometimes actively harmful tech policy and legislation. Complex social problems require holistic responses, in which technology can play a part but should not be the only mechanism. We urge the Committee to consider the range of possible negative consequences that can arise from good-intentioned techno-centric proposals.[73]

Concerns about widespread end-to-end encryption

5.52 Notwithstanding the submissions discussed above, the committee heard serious concerns about how end-to-end encryption may reduce the ability to detect and respond to child exploitation.[74] This issue attracted particular attention in light of Meta's plan to apply end-to-end encryption to Facebook Messenger and Instagram Messenger.[75]

5.53 In October 2020, the then Minister for Home Affairs, the Hon Peter Dutton MP, signed an international statement on end-to-end encryption and public safety. The statement has also been signed by the United Kingdom (UK), United States, New Zealand, Canada, India and Japan. It expressed support for strong encryption but also said that '[p]articular implementations of encryption technology…pose significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children'.[76]

5.54 The Department of Home Affairs described how anonymity can enable child exploitation:

The anonymity afforded by end-to-end encryption not only enables predators to groom victims on a social media platform, it also allows these criminals to safely connect and share tactics on how to perpetrate child sexual abuse, share explicit images, arrange live streaming of child sexual abuse with facilitators in vulnerable countries, and avoid law enforcement.[77]

5.55 Several inquiry participants highlighted NCMEC estimates that, if Meta proceeds with its planned expansion of end-to-end encryption, the number of reports to NCMEC may fall by more than half.[78] For instance, the Australian Institute of Criminology noted that Facebook and Instagram are 'two of the largest social media platforms in the world' and submitted:

As most CSAM on Facebook Messenger is detected using PhotoDNA and artificial intelligence tools, NCMEC has estimated that Meta's implementation of end-to-end encryption across all its major platforms will reduce the number of CSAM reports it receives by more than 50 percent. This does not mean that CSAM offending will be reduced; rather, Meta will no longer be able to detect it.[79]

5.56 This was further explained by the Institute's Deputy Director, Dr Brown, who said there are 'essentially two common forms of technology that are used' to detect CAM, being 'photo DNA, as a way of identifying existing material, and the use of AI to identify material which is likely to be CSAM, which is then confirmed with human moderators'. He advised that '[n]either of those deal with the encryption issue'.[80] Dr Brown also elaborated on how end-to-end encryption contributes to an 'environment of impunity', stating:

We published something in July that looks at end-to-end encryption. What we show is the upward trend in CSAM over time, but, interestingly, we see dips in that trend that coincide with large tech companies introducing end-to-end encryption. So we can show that, when WhatsApp implemented end-to-end encryption, there was a decline in the reporting of CSAM. When Snap introduced end-to-end encryption, you saw a similar decline. The National Center for Missing & Exploited Children suggests that, if Meta go down the line of introducing end-to-end encryption of all its messaging, then we'll see a 50 per cent reduction in reports to NCMEC too.[81]

5.57 A further example of reduced reporting was provided by the Uniting Church Synod:

In the first half of 2021, due to an unintended consequence of new EU [European Union] privacy laws, Meta stopped voluntarily scanning its platforms in the EU. During that time, the US National Centre for Missing and Exploited Children recorded a 58% reduction in reports of online child sexual abuse content. The reduction in detection demonstrates the disastrous consequence to curbing child sexual abuse online if automatic detection tools are blocked by end-to-end encryption.[82]

5.58 It should be noted that other technology companies, as well as Meta, also ceased voluntary detection during this period.[83]

5.59 eSafety submitted that social media's increasing use of end-to-end encryption will make online child exploitation investigations 'significantly more difficult':

It will create digital hiding places, and platforms may claim they are absolved of responsibility for safety because they cannot act on what they cannot see.[84]

5.60 The Australian Institute of Criminology observed that 'end-to-end encryption presents significant challenges to law enforcement officers who investigate CSAM offending, and limits companies' ability to prevent, detect and report CSAM occurring on their platforms'.[85] It cited cases in which key evidence may not have been available if end-to-end encryption had been in place:

For example, online chat logs are a key form of evidence in CSAM investigations. In one such Australian case, the offender used several popular platforms to distribute CSAM he had produced, which involved severe abuse of babies… In this case the offender's chat logs were used as evidence to demonstrate the severity of offending that took place. In another case, Meta detected CSAM in a conversation between an Australian man and a Filipino child, leading to the man's arrest when he travelled to the Philippines. If the platforms used by these offenders had implemented end-to-end encryption at that time, this evidence may not have been available for investigations and the offenders may still be at large.[86]

5.61 The AFP submitted that '[t]he introduction of E2EE [end-to-end encryption] will have an adverse impact on the AFP and Australian law enforcement to identify and disrupt instances of online child sexual exploitation'. It elaborated with specific examples:

The AFP and the ACCCE are concerned that the following type of scenarios could eventuate should Meta move to E2EE:

Without content visibility, NCMEC is not alerted by Meta to incidents of child abuse material and grooming, a subsequent report to the ACCCE is not made, and an investigation does not begin.

Without having the ability to examine a suspect's communications on any local devices, law enforcement may be unable to form the suspicion required to seize the device and conduct detailed forensic examination of it in order to obtain evidence.

Without being able to view an accused's communications, investigators cannot review the conversation and identify the victim for rescue.

Without being able to view the content of an accused's conversation, the Commonwealth Director of Public Prosecutions (CDPP) may not be able to prove these elements or determine the full extent of offending.[87]

5.62 The UK National Crime Agency observed that 'while some offenders deliberately seek to encrypt material, others who are less concerned with or aware of encryption have benefited from the increasingly common presence of encryption as default on online platforms and services'. It reported that, in 2020, UK authorities made 'over 5,500 arrests and safeguarded over 7,000 children in the UK alone'. However, these 'important outcomes would be severely impacted by Facebook's plans to move its Messenger service to become end to end encrypted'. The National Crime Agency advised the UK Government's view is that:

…the reality of Facebook's current plans will be marginal gains to the average user at the expense of serious risks to public safety, as terrorists, child sex offenders and other criminals would be able to communicate beyond the reach of law enforcement.[88]

5.63 In a recent lecture, the National Crime Agency's Director-General, Mr Graeme Biggar, expressed serious concerns about end-to-end encryption and suggested it is possible to provide privacy while still enabling lawful access:

I strongly support encryption. It is an important protection from a range of crimes. But the blunt and increasingly widespread rollout by the major tech companies of end-to-end encryption, without sufficient protection for public safety, poses a fundamental and negative implication.

It means they cannot protect their own customers, by identifying the most egregious illegal behaviour on their own systems. Each platform brings different risks, and the [United Kingdom] Online Safety Act recognises this, requiring companies to ensure safety within the services they are providing. If Facebook roll out end to end encryption their ability to spot child abuse will significantly reduce, as will the number of children we save from sexual abuse and the number of criminals we arrest on the back of their information. Let me be clear: this would be tantamount to consciously turning a blind eye to child abuse – choosing to look the other way.

It does not need to be like this. Despite the protestations of some, this does not need to be a binary choice: there are ways of providing for strong encryption and privacy, and still protecting customers and enabling lawful access. Ultimately, it appears to me that fundamental decisions on the balance between privacy and security are for democratically elected governments to make, not multinational corporations.[89]

5.64 The National Crime Agency also submitted the example of David Wilson, who was convicted of child sex offences in the UK (see Box 5.1 below). It posited that necessary evidence would have been lost if Facebook had implemented end-to-end encryption at the time.[90]

Box 5.1 The conviction of David Wilson for child sex offences

The UK National Crime Agency submitted:

An example of the low bar to offending, and how that leads to severe and new offending, with awful consequences for the victims is the case of David Wilson, who following an NCA investigation was recently imprisoned in the UK for 25 years, extended to 28 years following an unduly lenient sentence appeal.

Wilson approached more than 5,000 boys worldwide and tricked at least 500 into sending him sexual videos and images of themselves. He scoured social media sites for vulnerable victims, and was able to set up false, unverified profiles of teenage girls, and share sexual images purporting to be those girls to pique the victims' interest. Through deceit he was then able to entice them to share images of themselves, before blackmailing them into more severe acts, including abusing younger siblings or friends, by threatening to share the initial images.

In this high volume, complex case, the content of 90 Facebook referrals via NCMEC was critical to acquiring evidence to bring Wilson to justice, and Facebook's response to an International Letter of Request included over 250,000 messages to victims.

Critically, had Facebook's plans for end to end encryption of its Messenger service been implemented, that evidence would have been lost. This is not just in relation to the Wilson case, the impact would be far wider; the tech industry made 21 million CSA referrals last year, and based on industry referrals the NCA and UK policing made over 5,500 arrests and safeguarded over 7,000 children in the UK alone.

The case of David Wilson, and many others, makes it clear that to stop the pathway of escalation into severe offending, there needs to be zero tolerance to the presence of CSAM on industry platforms and systems.[91]

5.65 ECPAT International recognised that many stakeholders 'cite legitimate privacy concerns from users of their platforms as a reason' for adopting end-to-end encryption but emphasised the risks it presents for child exploitation. It submitted that 'online service providers must not be allowed to prioritize the privacy of all users through E2EE [end-to-end encryption] over the right to protection for all children and indeed the right to privacy for victims of online child sexual exploitation and abuse'.[92]

5.66 The Uniting Church Synod cited the Virtual Global Taskforce, which it said 'rightly points out that end-to-end encryption places the right to privacy of people producing and distributing child sexual abuse material over the right to privacy of their victims who should not have their images shared'.[93] Dr Mark Zirnsak of the Uniting Church Synod also submitted that:

…at times in this space I often have a significant disappointment with sections of the human rights community who appear to champion the right of privacy over all other human rights. If you read their submissions in this space around online regulation, they will not acknowledge that the abuse of children in the online space is a human rights abuse. I find that truly bizarre. Even when it comes to the right of privacy, which they claim to be upholding, they don't seem to give any acknowledgement to the violation of the right to privacy of survivors when images of their abuse are being posted online and when platforms are not taking steps to remove the material. So there is more than just the right to privacy of potential offenders; there is also the right of privacy of the victims and survivors of child sexual abuse.[94]

Responding to child exploitation on platforms using end-to-end encryption

5.67 Meta recognised that end-to-end encryption 'poses a legitimate policy question: how to promote the safety of users if you're not able to see the content of their messages?'[95]

5.68 One measure highlighted by Meta's representatives was prevention.[96] For example, Meta's Global Head of Safety, Ms Davis, pointed out that people under the age of 18 who join Instagram are 'defaulted to a private account' and 'unconnected adults cannot directly message minors on our platform'.[97] She also referred to 'making it harder for potentially suspicious accounts to search for, find or discover minors' as well as presenting users with safety notices that 'help to spot suspicious activity and encourage people to take action and report or block when something doesn't seem right'.[98]

5.69 When asked what proportion of its CAM detections would no longer be possible with end-to-end encryption, Ms Davis said 'I don't have a specific number for you' but 'I do believe we will see numerous reports'.[99] Ms Davis highlighted that WhatsApp, which is already end-to-end encrypted, made 400,000 reports to NCMEC in 2020.[100] In 2021, this increased to 1.37 million reports.[101]

5.70 Meta said this detection is possible on an end-to-end encrypted service by 'using advanced technology to proactively scan unencrypted information – including user reports – and to evaluate group information and behaviour for suspected sharing of CSAM'.[102] Ms Garlick highlighted the upward trend in WhatsApp reports and said '[t]he nature of these behavioural models is that machine learning builds on itself'.[103] Meta's Head of Public Policy for Australia, Mr Josh Machin, described the kind of behavioural signals that are examined:

A normal WhatsApp user, say, like you or I, will have a particular pattern. We'll probably message most people one to one. Maybe we're in a couple of groups. Maybe if we're very popular there will be a large group of people. But a user who is part of a sophisticated criminal enterprise has different behaviour on encrypted services than an ordinary user. Even if we are not able to see the content, there's some pretty effective work we've been able to do to analyse the metadata based on the behaviour of the individuals involved.[104]
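
The behaviour-based approach Mr Machin describes can be sketched in simplified form. The example below is hypothetical: the features, weights and threshold are illustrative assumptions rather than Meta's actual signals or models, which rely on machine learning over far richer metadata.

```python
from dataclasses import dataclass

# Hypothetical sketch of metadata-only risk scoring of the kind described in
# the evidence: no message content is inspected, only account behaviour.
# The features, weights and threshold are illustrative assumptions.

@dataclass
class AccountMetadata:
    account_age_days: int
    distinct_recipients_per_day: float   # one-to-many messaging pattern
    reports_received: int                # user reports against the account
    minor_contact_attempts: int          # contacts with accounts flagged as minors

def risk_score(m: AccountMetadata) -> float:
    """Combine behavioural signals into a single score (higher = riskier)."""
    score = 0.0
    if m.account_age_days < 7:
        score += 1.0                     # newly created accounts are weighted up
    score += 0.1 * m.distinct_recipients_per_day
    score += 0.5 * m.reports_received
    score += 1.0 * m.minor_contact_attempts
    return score

def flag_for_review(m: AccountMetadata, threshold: float = 3.0) -> bool:
    """Queue the account for review; message content is never decrypted."""
    return risk_score(m) >= threshold

# Example: a week-old account messaging many strangers, including minors.
print(flag_for_review(AccountMetadata(3, 40.0, 2, 1)))  # True
```

Scoring of this kind can operate without decrypting messages, which is why Meta and Google point to metadata and behavioural signals as detection avenues that remain available under end-to-end encryption.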

5.71 In addition, Ms Davis posited that the number of NCMEC reports 'is not necessarily a great proxy for what's actually happening out there'. She cited analysis which found that 90 per cent of Meta's reports 'were of visually similar images of things we've previously reported', and half the reports 'came from just six videos'. Ms Davis confirmed that '[t]his doesn't mean that the individual in that video isn't being harmed' and that 'any one share is harmful to the victim'. She explained:

You can see that numerous reports are not things that law enforcement would act upon and, in fact, get in the way sometimes of their ability to identify the most important reports for acting. Those numbers are not necessarily a really good proxy of how you measure success in this area.[105]

5.72 Google also acknowledged there are risks that encrypted communications can be 'misused and abused by bad actors', but advanced that 'there are appropriate tools to fight the spread of CSAM even in encrypted environments'. It submitted:

Strong encryption doesn't create a law free zone; companies can still deploy several anti-abuse protections using metadata, behavioural data, and new detection technologies without seeing the content of messages encrypted in transit (thereby respecting user privacy). Our work to increase the cybersecurity posture of users while enabling law enforcement agencies to investigate and solve crimes demonstrates that the goals of public safety and user security are compatible.

We need to find ways to enable this work without engineering vulnerabilities into products and services in ways that weaken security for all users.[106]

5.73 A representative of Google, Ms Cashman Kirstein, further explained:

We make every effort to ensure that our responses to those lawful requests for information are as robust as possible. Depending on the case itself, while we may not be able to provide the content in an encrypted message itself, we would be able to, through valid legal process requests, provide things like metadata or other signals that could be related to that message itself.[107]

5.74 Ms Cashman Kirstein confirmed that 'when a message is encrypted, for example, we would not be able to see into the encrypted message'. She said:

While we would not be able to provide information on the message itself, it is possible that any kinds of files could be detected on other parts of our services that might be sent in an encrypted message, so it could be detected on another part of our services, but we would not be able to detect it inside that message itself.[108]

5.75 Meta emphasised that it is not viable to offer end-to-end encryption while also scanning the content of users' messages:

Some stakeholders are calling for the creation of a "backdoor" that would grant them power to read certain content. But it isn't that simple. Creating a backdoor requires building a structural weakness into a secure system used by billions of people every day. Once the weakness is there, we cannot choose who finds it. Cybercriminals are well resourced and technologically skilled: a backdoor for the good guys is just an open door for criminals.[109]

5.76 In a similar vein, the Communications Alliance reported that '[m]ost experts are of the view that enabling for the circumvention of encryption for any users can undermine the security of all users'.[110] In a joint submission, the Communications Alliance and Australian Mobile Telecommunications Association recognised that encryption is used to conceal criminal activities and submitted that it is:

…key, to the extent technically possible, to rely on a secure framework that safeguards individual freedoms and privacy of individuals, including the privacy afforded through encrypted communications, while simultaneously allowing [law enforcement agencies] to pursue their goal of upholding and enforcing law and order where there are reasonable grounds to believe that those are at risk.[111]

5.77 Digital Rights Watch raised concerns about 'client-side scanning' systems, which 'scan message contents (text, images, videos, files etc) for content matches or similarities within a database before the message is sent to the intended recipient'. It observed that client-side scanning 'is sometimes presented as a way to scan content without breaking end-to-end encryption', and '[w]hile in some cases this may be technically true, this claim has been refuted'. Regarding client-side scanning, Digital Rights Watch said:

it undermines the promise of private and secure communications

it facilitates the ability to monitor communications at scale, far beyond the detection of CSAM

it creates vulnerabilities for criminals to exploit by creating additional ways to interfere with communications (increasing the 'attack surface').[112]
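
Client-side scanning, as described at paragraph 5.77, can be sketched conceptually as follows. This is an illustration only, assuming a toy hash and placeholder database; it does not represent any deployed or proposed system, and real proposals differ in detail.

```python
# Conceptual sketch of client-side scanning: content is hashed and compared
# against a database *on the user's device* before it is encrypted and sent.
# The toy hash and placeholder database are illustrative assumptions only.

ON_DEVICE_HASH_DATABASE = {0x9F3A6C21D4E8B07F}  # placeholder signatures

def toy_hash(pixels_8x8: list[int]) -> int:
    """Toy 64-bit signature; real proposals use robust perceptual hashes."""
    avg = sum(pixels_8x8) / len(pixels_8x8)
    return sum(1 << i for i, p in enumerate(pixels_8x8) if p >= avg)

def send_message(pixels_8x8: list[int], encrypt_and_send, report_match) -> None:
    """Scan before encryption: this is what distinguishes client-side scanning
    from the server-side scanning that end-to-end encryption prevents."""
    if toy_hash(pixels_8x8) in ON_DEVICE_HASH_DATABASE:
        report_match()            # a reporting channel exists outside the
    encrypt_and_send(pixels_8x8)  # encrypted conversation itself
```

The concerns listed above flow from the fact that both the database and the reporting channel sit on the device: whoever controls them controls what is scanned for, which is why critics argue the technique undermines the guarantees of end-to-end encryption even though the transport itself remains encrypted.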

5.78 Digital Rights Watch was also concerned about deploying automated technologies to detect CAM that has not previously been identified, including because of 'the risk of both over- and under-capture of content'. It gave an example:

In a recent widely reported case, a father took images of his son's genitals, at the request of a nurse, and sent them to his doctor. The image was uploaded automatically to Google, which classified it as abuse and suspended his accounts, which he did not get back. Law enforcement and intelligence agencies, as well as tech companies themselves, generally have very little liability for the cost of false positives, despite the immense harm that can follow as a result.[113]

5.79 Some inquiry participants suggested principles that should apply to platforms that use end-to-end encryption. For instance, ECPAT International submitted:

Considering that over 50% of Internet traffic is already encrypted, companies have an obligation to invest in the development and deployment of systems and tools that can aid investigations and detection of child sexual abuse material and other harmful behaviours in their end to end encrypted platforms that are as efficient as systems relying on content detection operating in non-encrypted environments.[114]

5.80 The Uniting Church Synod highlighted five principles to safeguard children in end-to-end encrypted environments, proposed by NCMEC in February 2020:

1. Do not implement end-to-end encrypted communications for accounts where a user has indicated they are under 18 years old;

2. Implement detection technologies, at least as effective or better than those currently available, to prevent offenders from distributing child sexual abuse material.

3. Adopt technology vetted by the child protection community to identify sexual grooming of children by adults;

4. Promptly report apparent child sexual exploitation to NCMEC's Cyber Tipline with actionable information to help rescue child victims and hold offenders accountable; and,

5. Ensure that law enforcement can use existing legal process to effectively investigate the sexual exploitation of children.[115]

5.81Dr William Stoltz proposed that the application of end-to-end encryption should be 'context dependent'.[116] Dr Stoltz recommended the government ask the Department of Infrastructure, Transport, Regional Development and Communications[117] to explore regulatory options in this area and advanced that:

…modern users should be able to access encrypted comms for things like engaging with loved ones, conducting business transactions or, as would be the case for committee members, discussing affairs of state. However, I doubt anyone, Meta included, could mount a reasonable justification for unsolicited communications between an unknown adult and a minor to be encrypted.[118]

5.82When asked about the technical possibility of excluding those under 18 years of age from encryption, Ms Davis of Meta said it is 'not possible to create a partial encryption system that wouldn't undermine encryption for other individuals engaging on the platform'. Ms Davis pointed to other protections that are compatible with encryption, such as 'enabling the blurring of images and allowing people to do that to manage when messages come in, or preventing people from being able to contact someone, or optional reporting in moments where we think someone may be encountering harm'.[119]

5.83Following Meta's appearance before the committee in 2021, the Department of Home Affairs made a supplementary submission expanding on its concerns about end-to-end encryption. It welcomed Meta's 'acknowledgement of the need to balance privacy and safety' but said it remains 'unconvinced that their [Meta's] current plans to adopt end-to-end encryption will not be detrimental to the ability to keep children safe from online child sexual abuse'.[120]

5.84One concern the Department of Home Affairs raised was an anticipated reliance on reporting by victims:

Meta has indicated that following the implementation of end-to-end encryption on Messenger and Instagram Direct, there will be a significant dependence on victim reporting. Law enforcement will also rely more heavily on victims having to screen shot or capture images, videos or chat logs as evidence of an offence.

Placing the onus on victims to report abuse will mean law enforcement can only respond to victims where abuse has already taken place, and the consequent harms have been realised. While secondary intervention approaches are important in addressing abuse and protecting at risk children from further exploitation, primary prevention strategies are key to preventing long-term harms, and addressing the underlying causes of child sexual exploitation and abuse.[121]

5.85Regarding Meta's plan to use tools such as artificial intelligence and machine learning to detect problematic behaviour signals, the department said:

Despite repeated requests, the Department has not been provided with any verifiable evidence to suggest that these tools are effective. Further, despite repeated requests, the Department has not been provided with any examples of what a CSAM referral to NCMEC may look like following the implementation of end-to-end encryption. This means the Department is not able to determine if reporting will contain sufficient information in the form of indicators to facilitate further law enforcement investigations.

The Department is also concerned that once Messenger and Instagram Direct move to default end-to-end encryption, the artificial intelligence and machine learning tools will no longer be able to detect and learn from new content, meaning their effectiveness would diminish rapidly.[122]

5.86A representative of the Cyber Security Cooperative Research Centre, Ms Anne-Louise Brown, acknowledged technology companies' evidence about the use of metadata but suggested that offenders will adapt to avoid detection:

The Google submission, in particular, raises that they can use metadata and behavioural analytics to help locate and stop child exploitation material online. The problem with this is that it really does, in my understanding, rely on behavioural aspects. So the metadata can correlate patterns. It's like pattern theory. It will find patterns in a particular behaviour and the anomalies within that behaviour and pinpoint that, which is fantastic until the criminals perpetrating this kind of activity realise that they are being tracked or traced via this form of behavioural analysis, at which point, they will change their behaviour. That's the nature of this kind of crime; it's to remain undetected. So, while encryption provides that safety and behavioural analytics can be used to detect it, it's only a matter of time before these criminals pivot again to something new and just go dark again.[123]
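
Ms Brown's point about behavioural analytics can be illustrated with a simple sketch of metadata-based scoring: an account is flagged on patterns such as account age, contact volume and reply rates, without any message content being read. The fields and thresholds below are hypothetical and do not reflect any platform's actual heuristics; the simplicity of such rules is also part of what makes them evadable once offenders learn which patterns are flagged.

```python
# Illustrative sketch of metadata-only behavioural scoring.
# All fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class AccountMetadata:
    account_age_days: int
    new_minor_contacts_last_week: int   # contacts whose profiles indicate they are under 18
    messages_sent_last_week: int
    replies_received_last_week: int

def risk_score(m: AccountMetadata) -> float:
    """Combine simple metadata signals into a rough score between 0.0 and 1.0."""
    score = 0.0
    if m.account_age_days < 30:
        score += 0.2   # newly created account
    if m.new_minor_contacts_last_week > 10:
        score += 0.4   # unusually many new contacts who appear to be minors
    if m.messages_sent_last_week > 200 and m.replies_received_last_week < 20:
        score += 0.4   # high-volume, largely unanswered outbound messaging
    return min(score, 1.0)

if __name__ == "__main__":
    account = AccountMetadata(account_age_days=7,
                              new_minor_contacts_last_week=25,
                              messages_sent_last_week=500,
                              replies_received_last_week=5)
    print(f"risk score: {risk_score(account):.1f}")   # 1.0: would be queued for review
```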

5.87Concerns about the utility of technology platforms' reports under end-to-end encryption have been raised by the Virtual Global Taskforce.[124] In a position statement on end-to-end encryption, provided to the committee by the UK National Crime Agency, the taskforce says end-to-end encryption is:

…designed to prevent industry partners and others from accessing user content, requiring them to rely on AI to detect behavioural indicators through metadata. While much can be deduced from metadata, it is usually insufficient to meet the threshold required for a search warrant. Furthermore, the companies themselves advise that oftentimes individuals identified through these methods only meet their policy threshold for warnings or exclusion from some features on the platform, and do not trigger a report to law enforcement.[125]

5.88A further concern of the Department of Home Affairs related to the notion that end-to-end encryption is 'all or nothing', which the department rejected:

It has been demonstrated that it is possible to develop tools that allow scanning for CSAM in a fully end-to-end encrypted environment, without impeding on a user's privacy.

For example, on 6 August 2021, Apple announced the rollout of a new feature called 'NeuralHash', which allows on-device scanning of images to detect CSAM on iOS devices. The new feature will reportedly detect a hash match against a database of known child abuse imagery before an image is uploaded to iCloud Photos, within an end-to-end encrypted ecosystem. Child sexual abuse material that is detected will then be referred to the USbased National Center for Missing and Exploited Children (NCMEC) for triage and investigation.[126]
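
Hash matching of the kind described in this example generally relies on perceptual hashes which, unlike cryptographic hashes, remain similar when an image is resized or recompressed; a common approach is to declare a match when two hashes differ in only a few bits. The sketch below illustrates that general idea with hypothetical 16-bit hashes and an arbitrary threshold; it is not a description of NeuralHash or any other product.

```python
# Toy illustration of thresholded perceptual-hash matching.
# Hashes, database and threshold are hypothetical.
KNOWN_PERCEPTUAL_HASHES = [0b1011001011110000]

def hamming_distance(a: int, b: int) -> int:
    """Count of bit positions in which two hashes differ."""
    return bin(a ^ b).count("1")

def matches_known_material(image_hash: int, threshold: int = 2) -> bool:
    """Declare a match if the hash is within `threshold` bits of any known hash.

    In an on-device design, this check would run before an image is uploaded
    to cloud storage, with confirmed matches referred for human review.
    """
    return any(hamming_distance(image_hash, known) <= threshold
               for known in KNOWN_PERCEPTUAL_HASHES)

if __name__ == "__main__":
    print(matches_known_material(0b1011001011110010))  # True: differs by one bit (e.g. a recompressed copy)
    print(matches_known_material(0b0100110100001111))  # False: far from every known hash
```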

5.89In submitting that the 'adoption of end-to-end encryption by more ESPs [electronic service providers] will likely provide a haven for CSAM offending, rather than preventing it', the Australian Institute of Criminology said:

Further policy discussions are required about how to address the risk that end-to-end encryption will increase the difficulty of detecting, preventing and investigating CSAM offences, taking into account the impact on current and future child victims. These discussions should also consider the development of detection tools that could operate in the end-to-end encryption environment.[127]

5.90The Department of Home Affairs also observed that Apple's announcement received 'significant backlash from privacy advocates, and it is now unclear if Apple will implement this technology as previously intended'.[128] It was later reported, in December 2022, that Apple decided not to scan devices and iCloud photos for CAM.[129]

5.91In October 2022 (before that announcement), eSafety advised the committee:

There are a number of developing solutions that would ensure illegal activity online can be addressed that do not compromise encryption and allow lawful access to information needed in serious criminal investigations. Emerging solutions include using implementing proactive detection tools at transmission, at the device level (as Apple is exercising with its safety prompts for children sending/receiving nudity in iMessage, launched in April 2022 in Australia).[130]

Cooperation and information provided by technology providers

5.92As the South Australian Commissioner for Children and Young People described, one of the key practical challenges for law enforcement in responding to online child exploitation is that:

…these crimes have no borders, yet legal powers of investigation and enforcement are jurisdictionally confined and mutual assistance between governments is logistically lacking for a range of reasons.[131]

5.93The committee heard evidence from technology platforms about their interaction with law enforcement and responsiveness to requests. For example, Google advised that it has 'a longstanding and well established process for responding to lawful requests from Australian law enforcement agencies to access Google account data'. It also said that its 'policies for lawful data access require that we recognise the jurisdictional constraints that law enforcement agencies operate within'.[132] Google observed that US law:

…allows Google and other service providers to voluntarily disclose user data to governmental entities in emergency circumstances where the provider has a good faith belief that disclosing the information will prevent loss of life or serious physical injury to a person.[133]

5.94TikTok submitted that it is 'committed to responding to law enforcement requests for user information, disclosure or preservation in a manner that respects our legal obligations with respect to user privacy'. TikTok said that it carefully reviews law enforcement requests on a case-by-case basis, and its policies and procedures 'require that TikTok discloses or preserves user data only where a request is based on a valid legal process or in emergency circumstances'.[134]

5.95In addition, Ms Woods-Joyce advised that TikTok reports all CAM detections to NCMEC and, if an imminent risk is detected, it will 'in parallel reach out to law enforcement in Australia'. She said TikTok may also receive feedback from the Australian Centre to Counter Child Exploitation about 'behaviours that they would like us to be aware of', and this 'helps us to inform the work that we're doing in a platform environment'.[135]

5.96Meta's Regional Director of Public Policy, Ms Garlick, explained that Meta has 'a dedicated liaison team' for law enforcement to engage with, as well as:

…an online portal where they [law enforcement] can submit any requests for access to data, and we work to process those, we triage them and make sure that we action the most serious ones as fast as possible, and then work constructively with them in a wide range of activities as well to promote greater awareness around online safety and things like that.[136]

5.97A representative of Google, Ms Longcroft, reported that, during 2022, Google received 'more than 6,000 data access requests directly from Australian law enforcement agencies', and 62 requests under Google's 'emergency disclosure policy, which facilitates access to account data in urgent circumstances where life is at risk'.[137] TikTok said that its 'engagement and education efforts with Australian LEAs [law enforcement agencies] have resulted in an increase in data disclosure requests over the last 18 months', rising from one request from Australian authorities in the first half of 2021 to 87 in the first half of 2022.[138]

5.98The Communications Alliance also confirmed that '[a]ll providers have an emergency process for requesting information which is accessible at all times and, depending on the provider, yields a result within hours'.[139] In addition, the Communications Alliance and the Australian Mobile Telecommunications Association said that carriers and carriage service providers 'already provide significant levels of assistance' to law enforcement and intelligence agencies under a range of legislation, as well as voluntary assistance.[140]

5.99Both Google and TikTok referred to training they provide for law enforcement agencies.[141] For instance, TikTok submitted that its Law Enforcement Outreach team 'is responsible for ensuring Australian LEAs are familiar with our Law Enforcement Guidelines and provides guidance during the submission of legally valid preservation and data disclosure requests as required'.[142]

5.100However, evidence from other inquiry participants suggested that the responsiveness of technology providers to requests from law enforcement varies.[143] For instance, Victoria Police conveyed its experience:

Google have been the most forthcoming in providing relevant information to Victoria Police, when requested. The response time is the shortest, two to three days.

Facebook / Instagram requires an overview of the investigation and a justification as to the relevance of the data. Unlike Google, Facebook / Instagram will reject any request where a clear justification as to the relevance of the information requested has not been provided. The justification and description required for releasing subscriber/IP details for child exploitation matters is much lower than other offence types. The wait time for Facebook replies is approximately one week for informal requests for information. Formal mutual assistance requests (where information is required to be included on a brief of evidence in support of criminal charges) involve a law enforcement agency requesting information from overseas organisations such as Facebook to assist in an investigation. This is currently a convoluted process that may take anything from six months to two years to receive a response.

Other companies such as Microsoft and Snapchat are less inclined to provide responses to law enforcement.[144]

5.101Victoria Police also reported that '[m]any [providers] will not co-operate with any law enforcement agency and many overseas-based providers require a mutual assistance request'.[145]

5.102eSafety reported that 'industry tends as a rule to remove clear CSEM from its networks and storage services' but it can be challenging for hotlines to persuade industry to remove related material. It said that '[o]ften, industry will remove material only when it is illegal within a specific jurisdiction, and in some cases efforts to take down CSEM-related material are met with resistance'. eSafety also pointed to 'reluctance to removing written accounts of adults sexually abusing children or illustrated and drawn depictions of sexual abuse (even though they are prohibited in several jurisdictions including Australia)'.[146]

5.103eSafety expressed concern with 'using illegality as the vector to determine whether industry should act in response to harmful content'. It proposed that '[o]nline platforms should retain the prerogative to identify harmful content based on users' complaints for illegal and harmful content, to safeguard children and all citizens online'.[147]

5.104Moreover, eSafety advised that '[i]ncreasingly, websites that contain CSEM are hosted by network providers that deliberately obscure their corporate footprint'. Indeed, some technology providers:

…openly market themselves as being 'bulletproof' implying that they are resistant to takedown and disruption and with a high tolerance to hosting illegal content. Removal of CSEM by INHOPE members, industry and law enforcement can be complicated by these tactics.[148]

5.105The Uniting Church Synod expressed concern that certain technology corporations are, among other things, '[o]bstructing reasonable requests from police investigating child sexual abuse' and '[n]ot responding quickly enough in removing posts of child sexual abuse'.[149] It further suggested:

There are online technology corporations that have an ideological position that the privacy of their clients is paramount. The position leads them to be reckless in designing services that frustrate efforts of police to stop child sexual abuse, terrorism and other dangerous criminal activity.

Other online technology corporations and their management argue they will only assist police to the extent that they are forced to do so by the law.[150]

5.106The Uniting Church Synod described a case study relating to a sex offender sentenced in July 2017 to a decade in a US state prison. Police sought access to emails that could help them pursue the wider network with which this offender was involved:

Although Google tipped off police about the child abuse files that had crossed its network, the corporation refused to give them access to his [the offender's] Gmail account, even though police had a search warrant.

Google's argument was reported to be that the data was "out of jurisdiction." The corporation argued some of the data in that Gmail account were stored on Google servers outside the United States and, since a court ruling in 2016, technology companies were not required to turn over that information.

This highlights the problem where technology corporations can frustrate police investigations by choosing where they say the online data is physically located. Such a choice can be arbitrary, at the total discretion of the technology corporation.[151]

5.107This example was put to Google's representatives. Ms Longcroft said that '[e]nsuring that we have timely response to law enforcement agencies is a critical concern' and, regarding Google's 'compliance with the eSafety Commissioner's request in this area, we had a 100 per cent compliance rate within a 24-hour period'. Ms Longcroft also said:

Requests for access to content are required in order to balance the protection of user privacy and due process with law—both Australian and US law—requires us to go through the procedure that [Ms Cashman Kirstein] had earlier outlined, making requests through traditional processes, and those processes, when it is a content related request, must go through the procedures or else they would violate US law.[152]

5.108Ms Longcroft also suggested that procedures will improve following a recent agreement between Australia and the United States under the US Clarifying Lawful Overseas Use of Data (CLOUD) Act—which is discussed below.[153]

Mutual legal assistance and alternative procedures

5.109An existing formal method for obtaining overseas data for use in court proceedings is the mutual legal assistance process. The Department of Home Affairs explained that under this process, which was 'designed before the internet and without considering the nature of modern telecommunications networks', it can take 'a significant amount of time' to receive data from foreign jurisdictions. Alternative methods exist, such as 'police to police and agency to agency assistance', but these approaches 'can also be lengthy, and can result in information being provided that is not admissible in court due to the requirements of the Foreign Evidence Act 1994'.[154]

5.110The department detailed an example of how delays in mutual legal assistance 'can frustrate the successful investigation and prosecution of child exploitation offences'. In the example:

…it took almost 3 years from the date of initiating the mutual assistance request process to receive working copies of the relevant material, with a further 18 months until receipt of the formally sealed [mutual assistance request] material.[155]

5.111Ms Mary-Jane Welsh of Victoria Police described the existing mutual assistance process as 'incredibly cumbersome and time consuming' and said that 'it's not unheard of for us to have a person before the court and have the matter finalised and then, shortly after, for us to receive information back via a mutual assistance request'.[156] In addition, NSW Police submitted that the wait times for mutual assistance 'can lead to prosecutions being abandoned'.[157]

5.112The Uniting Church Synod said it has 'spoken with prosecutors who say that if evidence for a trial needs to be obtained by a Mutual Legal Assistance request, then they build their case on the assumption the information will not be available in time to be used'.[158]

5.113The Communications Alliance submitted that delays in the mutual assistance process 'are attributable to the complex and lengthy legal process involved and coordination between multiple government agencies via MLAT [mutual legal assistance treaty], rather than platforms' times to valid legal requests'.[159]

5.114A forthcoming alternative method for obtaining overseas data is provided by the Agreement between the Government of Australia and the Government of the United States of America on Access to Electronic Data for the Purpose of Countering Serious Crime, signed by Australia and the United States on 15 December 2021.[160] This agreement was enabled by the Telecommunications Legislation Amendment (International Production Orders) Act 2021 in Australia, and the CLOUD Act in the United States.[161]

5.115The Attorney-General's Department explained that 'the United States is the largest data controller in terms of communications technologies, services and platforms, which means critical evidence of child exploitation offences is most often located within the United States'. It submitted that the arrangements (which will enter into force upon the exchange of diplomatic notes with the US) will:

…reshape Australia's international crime cooperation efforts by expediting the process for obtaining electronic data held in foreign countries. The Agreement achieves this by facilitating direct access to electronic data for investigations of serious crime between the jurisdictions of a foreign country and Australia. The Agreement enables authorities in each country to obtain certain electronic data directly from prescribed communication providers operating in the other's jurisdiction, significantly reducing the time taken to obtain information relevant to the ongoing detection, prevention, investigation and prosecution of serious crime. The Agreement will complement existing international crime cooperation mechanisms, sitting alongside current frameworks such as mutual legal assistance.[162]

5.116Meta and Google expressed support for this forthcoming process. Meta's Head of Public Policy for Australia, Mr Machin, said the new process would 'make a very significant difference in providing a much clearer and easier legal framework for companies to provide Australian law enforcement assistance in ways that are much faster'.[163] In a submission made before Australia signed the agreement with the US, Google encouraged Australia to do so 'as this will not only expedite lawful requests to access communications content but also improve safeguards around the production of evidence in Australian criminal proceedings'.[164] In addition, Ms Longcroft advised that Google is 'working closely with the Australian Attorney-General's Department on the administration of that [CLOUD] act, and we expect that it will significantly improve the procedures that we are able to follow in compliance with Australian and US'.[165]

5.117The Communications Alliance and Australian Mobile Telecommunications Association confirmed that their members 'will assist agencies through any legislated measures that will flow out of the Telecommunications Legislation Amendment (International Production Orders) Act 2021 and any associated international treaties'.[166] The Communications Alliance also said it anticipates that 'once the Australia-US CLOUD Act agreement comes into effect, this should significantly shorten turnaround times for law enforcement requests'.[167]

5.118The Cyber Security Cooperative Research Centre suggested that 'authorities will no longer have to rely on the cumbersome and outdated mutual legal assistance regime, with data requests that previously may have taken years to process now accelerated to a processing time of just months'.[168]

5.119Detective Superintendent Jayne Doherty, NSW Police, told the committee that the CLOUD Act 'will help us with American based ones [content providers] but we don't have that same interaction with all countries and that's what we need'. She added that '[t]he borders don't exist on the internet, so they shouldn't exist in the legislation either'.[169]

Other interaction between technology companies and law enforcement

5.120Victoria Police highlighted a range of challenges for law enforcement in relation to technology providers, including that covert law enforcement operations may be disrupted by technology companies. It explained:

Covert online operatives invest considerable time and effort developing their online account profiles. Covert online operatives are sometimes identified by the provider as breaching their terms of service, which causes the covert operator's account to be shut down. The evidence gathered by the covert operative is then lost. A national approach to developing partnerships with technology providers would be beneficial. While JACET [Joint Anti-Child Exploitation Team] has established direct relationships with Facebook, Instagram, WhatsApp and Twitter, demonstrating that law enforcement agencies can secure relationships with large overseas providers, these arrangements rely to an extent on the goodwill of the providers.[170]

5.121The Uniting Church Synod expressed concern about 'ICT corporations that reserve the right to tip off suspected offenders that they are under investigation'. It submitted:

Tipping off offenders places victims and witnesses in danger. It also allows an offender to destroy evidence. Often offenders will use multiple platforms and communication devices. Thus, even if data on the platform tipping off the offenders were to be preserved from destruction, after being tipped off, the offender might be able to destroy evidence on other platforms.[171]

5.122Collective Shout submitted that '[o]ne problem is that the ISP notifies a user that their account is being shut down, giving them time to cover their tracks before they are contacted by law enforcement'.[172]

Strengthening obligations on technology providers

5.123As illustrated throughout this chapter, there are concerns about the adequacy of efforts by technology companies to combat online CAM. This has also been reflected in eSafety processes; for example, eSafety recently reported on the responses that several major technology companies provided about their measures to deal with online CAM, and commented that its report:

…highlights serious shortfalls in how some companies detect, remove and prevent child sexual abuse material and grooming, inconsistencies in how companies deal with this material across their different services and significant variations in the time it takes them to respond to public reports.[173]

5.124During this inquiry, a range of participants expressed concern about the adequacy of the action taken by technology providers and the obligations placed upon them. For instance, Victoria Police submitted:

Online and media platforms, and internet service providers (ISPs), that fail to disclose child exploitation offences and fail to establish safety mechanisms to protect children who use their service from online harm, do not appear to be sufficiently penalised.[174]

5.125The South Australian Commissioner for Children and Young People recommended there be recognition that 'the digital environment is simply one more "place" that exploitation of children and young people can occur. The same fundamental principles should apply regardless of the location of the exploitation, whether in the digital environment or not'. It proposed:

The fact that technology providers and Internet Service Providers create and facilitate these unique environments means that the law should recognise (1) they owe a positive duty of care / duty to act in the best interests of their users and (2) owe a special duty of care to their most vulnerable users (children and young people).[175]

5.126The Australian Institute of Criminology pointed to the 'significant amounts' of child sexual abuse material detected on popular social media platforms, as well as its rapid distribution, and posited that the 'problem is beyond the capability of law enforcement to address alone'. It submitted that 'the companies that run these platforms have a responsibility to prevent offending and remove abusive material'. The Australian Institute of Criminology presented several proposals, including:

Firstly, every company should be consistent in their reporting of CSAM and transparent about their definitions of CSAM and the specific measures they use to prevent, detect and report it. Providing this detailed information will help enforce best practice standards and assist companies to improve their tools for preventing harm to children.

Secondly, more responsibility should be placed on ESPs [electronic service providers] to prevent CSAM from being uploaded in the first instance. These platforms should adopt evidence-based methods such as pop-up warning messages, which can deter the viewing or sharing of CSAM and refer individuals to sources of help. Deterrence messaging campaigns can also reach large numbers of individuals. These tools should also be evaluated; Meta currently uses pop-up warning messages to deter child sexual exploitation, yet there is no information publicly available on their impact or effectiveness.

Lastly, ESPs should invest in more innovative technology. Currently, NeuralHash, Apple's proposed technology to scan devices for CSAM, is the only publicly described tool that will detect CSAM in an end-to-end encryption environment. Although Apple has delayed the release of this technology, communication platforms should similarly invest in developing technology to prevent CSAM from being uploaded onto their platforms. This would supplement their current methods of detecting CSAM and removing it from their platforms, and will assist law enforcement with investigations.[176]

5.127International Justice Mission posited that technology companies which 'provide the platforms where online child abuse is carried out have a responsibility to prevent and disrupt offending and remove and report abusive material'. It also pointed out that '[t]echnology providers collectively possess both the financial and technological capabilities to prevent and deter this crime by making their platforms safer by design'. As a means of reducing the high volume of reports facing law enforcement, International Justice Mission supported the use of 'preventative technology that halts child sexual abuse material (whether photo, video or livestreamed child abuse) from ever being uploaded to a platform's server'. It provided examples:

The UK-based NGO, Internet Watch Foundation (IWF) partnered with digital forensics company Cyacomb to create an innovative tool that can block known images and videos of CSAM from being uploaded within end-to-end encrypted (E2EE) platforms, while still respecting user privacy. Another such preventative technology already in existence is SafeToNet’s product, SafeToWatch. This is a real-time video & image threat detection technology, capable of determining whether visual data represents undesirable and illegal content such as pornography, sexually suggestive imagery, cartoon pornography, and/or CSAM. The machine-learning algorithm can trigger several possible actions, such as obscuring harmful images, disabling image capture/recording/transmission, displaying a warning messages, etc.[177]

5.128International Justice Mission also proposed that '[e]lectronic service providers should be required to make ongoing investments in technological tools that prevent, detect and disrupt CSAM – in particular new material and livestreamed child abuse'. It noted that the Social Media Services Online Safety Code (developed through the eSafety codes process discussed below) requires 'ongoing investment in technological tools' and also contains 'a requirement to implement systems, processes and/or technologies that aim to detect and remove CSAM from the service'. A further recommendation from International Justice Mission was that digital service providers 'be required to provide sufficiently detailed reports to law enforcement to allow them to make effective use of the information', such as port information as well as IP addresses.[178]

5.129Collective Shout called self-regulation 'a failed experiment' and stated that there is a 'general failure of corporates to prioritise child safety over profit under selfregulation'. It advanced:

It is unreasonable for wealthy and well-resourced tech companies to place the onus of monitoring and reporting child exploitation and predatory activity on their platforms on citizens like ourselves. State parties globally should implement uniform regulations to prevent and penalise Big Tech companies which continue to profit from the trade in child exploitation.[179]

5.130ECPAT International proposed considering legal responsibilities for online service providers to 'promptly comply with law enforcement requests for information, including across borders, to retain data for a minimal period, and to filter and/or block and/or take down child sexual abuse material'. ECPAT also suggested consideration of requiring:

…demonstrable efforts to minimize children's experiences of online child sexual exploitation and abuse on their platforms – safety first approaches. These may impose liability on platforms where sexual exploitation has been facilitated.[180]

5.131The Cyber Security Cooperative Research Centre suggested there is 'scope for a legislative regime to be introduced so that civil or criminal action could be taken against technology providers found to have had CAM published or shared on their platform'. This could 'introduce significant fines for platforms found in breach', and could be an expansion of the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 or a 'new, similar regime that deals specifically with CAM published or distributed on tech platforms'.[181]

5.132The Uniting Church Synod presented the following quote from Professor Hany Farid, Electrical Engineering & Computer Sciences and the School of Information, University of California:

From its earliest days, the internet has been weaponised against children around the world. From its earliest days, the technology sector has been negligent in ensuring that their platforms are not used to post child sexual abuse images. From its earliest days, the technology sector has profited while turning a blind eye to the horrific action of millions of their users around the world. This shameful behaviour must end. We must reclaim our online communities and hold the technology sector responsible for their actions and lack of action.[182]

5.133The Uniting Church Synod made a range of recommendations related to increasing the obligations placed on technology platforms, including the following:

Technology providers should be required to have 'robust systems to verify the identity of the people using their service'. These identities would not necessarily need to be public, but would be known by the provider and could be disclosed to law enforcement if needed.[183] Regarding this proposal, Dr Zirnsak acknowledged:

None of those system also be absolutely perfect; we're not naive about that. A really dedicated offender might still go to enormous lengths to conceal and use a false identity, but you're going to make it harder. This is the point, the issue of disruption: the more you can do and the harder you make it for people, the more you are going to deter them from doing that and the more you will remove any perceived sense of anonymity they have that increases their sense that they can get away with this.[184]

Legislation should require online technology corporations to 'detect proactively child sexual abuse material accessed or stored on their platforms or services for the purpose of blocking or removing such materials', and also report the material to the AFP in a useful format.[185]

There is ambiguity about whether a technology provider 'can be held to account under current Australian laws' for the destruction of CAM evidence in a foreign jurisdiction by a content manager sub-contracted by the technology provider, and so Australian law should make it:

…an unambiguous offence for a technology provider not to preserve and report evidence of child exploitation on their platform to law enforcement agencies where an Australian child or offender is involved or where the provider is located in Australia.[186]

In light of concern about the extent to which technology platforms cooperate with law enforcement requests, legislation should be amended to 'ensure that individuals inside technology corporations can be prosecuted for refusing to co-operate with legislative requirements that assist in the investigation or prosecution of online child sexual abuse'. The individuals who could be prosecuted would be 'those that make the decision not to cooperate'.[187] Dr Zirnsak explained:

Our fear is that if it's just the company that gets hit with a financial penalty, for example, they might still think it's worth their while to not comply with a request, whereas if it's the individuals inside the company making the decision to not comply with a lawful request under the legislation, I think you will find that will motivate compliance more.[188]

The Synod recommended that an existing requirement on internet service providers to 'disrupt ready access to online child sexual material contained on the INTERPOL "worst of" list using Section 313 of the Telecommunications Act 1997 be extended to cover a wider range of child sexual abuse material'. It explained:

For example, the INTERPOL list could be supplemented by the Internet Watch Foundation list. Further, data from attempts to access disrupted material could be provided to the Australian Federal Police in a format that would allow police to analyse and detect users that have a pattern of attempting to access such material.[189]

Legislation should require technology providers to 'have structures in place that allow users to easily report evidence of child exploitation material or activities on their platforms', including allowing a person to report anonymously and without having an account, and enabling reports of 'specific users, user profiles, specific posts, or a combination of the latter'.[190]

5.134Mr Glen Hulley of Project Karma reported that his NGO had held 'grave concerns' about how Meta was responding to CAM and child exploitation, and said that Project Karma 'and many others made a lot of noise about this five, six or seven years ago, and Facebook [now Meta] had to respond. Other governments around the world started pointing fingers too'. He said that Meta (then Facebook) subsequently:

…led the way on it and really began to invest a lot of money into their safety and trust divisions, hiring ex-law enforcement, developing regional headquarters around the world and putting in software—AI, which is still in its infancy and still being developed, but also algorithms and other mechanisms—to protect their licences and not have this type of material on their service.[191]

5.135Project Karma said that it has strong relationships with platforms including TikTok, Roblox and Discord, and expressed support for Meta's Trusted Partner Program. This program includes 'one-off grants to NGO's actively participating in the program, including PK [Project Karma] to further their work'. Mr Hulley explained it also enables Project Karma to make reports directly to Meta's safety officers (who have a process for notifying law enforcement about illegal material).[192] Project Karma proposed that Meta's Trusted Partner Program model be considered 'as a minimum standards model for technology providers' and also that 'it should be legislated as a mandatory requirement for all social media, instant messaging and online gaming platforms legally allowed to operate in Australia and available to people under 18'.[193]

5.136A further specific proposal was advanced by Dr Stoltz, who submitted that the services provided by Facebook (now Meta) 'are of such a significant scale and are so uniquely pertinent to the problem of online child exploitation, that Facebook Inc. should be the subject of special scrutiny and regulation by the Commonwealth'. While acknowledging that Meta's 'overrepresentation' in CAM reports to the NCMEC and AFP 'is likely due in part to high degrees of self-reporting by Facebook Inc. as compared with other technology companies', Dr Stoltz posited:

Facebook Inc.'s over-representation in these and similar statistics is also certainly because of the unmatched scale and the almost perfect suitability of their platforms for the practice of grooming minors. In particular, I refer to Facebook Inc.'s decision to create a seamless pathway between public social media communications (the Facebook platform and Instagram) to encrypted, secret messaging (Messenger and Instagram messaging).[194]

5.137Dr Stoltz recommended that the Minister for Communications declare Meta a 'carriage service provider' under section 87 of the Telecommunications Act 1997. Doing so would 'open avenues for the Commonwealth to hold Facebook Inc. more accountable for the role its services play in enabling online child exploitation'.[195]

5.138When this proposal was put to representatives of Meta, Ms Garlick suggested that Meta already has a 'good relationship' with law enforcement and so 'it might not be necessary to tinker with definitions in telecommunications acts when we've got the ability to work constructively through the existing frameworks'. Ms Garlick described some of Meta's existing efforts, including:

We've set up a dedicated portal. We have a dedicated team to liaise with law enforcement and we can disclose what we call basic subscriber information data quite quickly. Through that process we obviously have emergency channels if there's any threat to life—either we proactively disclose or law enforcement can ask us for assistance through those emergency processes.[196]

5.139Dr Stoltz also recommended the following:

That the Telecommunications Act 1997 be amended to empower the Minister for Home Affairs, 'based on advice from the Commonwealth's eSafety Commissioner', to 'direct a carriage service provider to cease its services where its operations are found to be prejudicial to safety of children'. This would replicate an existing power that the Minister for Home Affairs can exercise 'based on an adverse security assessment by the Australian Security Intelligence Organisation' where a carriage service provider's operations 'are found to be prejudicial to security'.[197]

That Facebook Inc should be required to report annually to the eSafety Commissioner to show how it is 'working to mitigate the risk of child exploitation occurring on [its] platforms and demonstrate why [it] should continue to be able to operate as a [carriage service provider] in Australia'.[198]

Technology company staff based in Australia

5.140A further issue in the inquiry was whether technology companies that provide services in Australia should be required to have staff in Australia.

5.141In February 2023, eSafety's Acting Chief Operating Officer, Mr Toby Dagg, advised that X (then Twitter) did not have any staff in Australia. He also discussed the value of local staff at technology companies:

In particular, there are no Australian staff left there [at X/Twitter], and the Australian complement was a really critical component of Twitter's trust and safety apparatus, as far as we were concerned. It's not only the presence of trust and safety personnel in a company that makes a difference, but the ability for a regulator like the eSafety Commissioner to pick up the phone to a local representative and say, 'Hey, we've got a problem,' or, 'We want to bring something to your attention; we think that something is happening here,' and provide that information. In the past, we found that to be very effective and in fact we had a really constructive relationship with Twitter and quite an effective one, I thought, particularly when it came to child sexual exploitation material. Now we do have contact with regional representatives but it's not quite the same as having someone that you can pick up the phone to and have a face-to-face meeting with.[199]

5.142In subsequent evidence, a representative of X, Mr Pickles, said that X does 'have a presence in Australia', though also said:

Historically, Twitter has always been a globally distributed company. We don't have staff based in every country where the service is available. Particularly for this issue, we would have staff based in specialist centres around the world, and we wouldn't have had staff in every market. Previous to the acquisition, we didn't have specialists working on this issue in Australia. That remains the case.[200]

5.143Regarding TikTok's trust and safety staff, the company explained:

Within our global Trust & Safety workforce we employ specialist staff with regional or country-specific expertise, who work alongside specialists with subject matter expertise who may have responsibilities spanning multiple markets, including Australia (e.g. expertise in minor safety, illegal activities and regulated goods). TikTok employs approximately 400 staff with dedicated responsibility for the Australian market, most of whom work as part of larger teams situated in our APAC Trust & Safety hub in Singapore. We have 4 Trust & Safety team members based in Australia.[201]

5.144Mr Dagg of eSafety was asked whether large technology companies should be required to have local staff:

On the question of policy, I won't comment. But certainly our expectation would be that companies as large as Twitter and others should properly fund and resource local representatives—for a range of reasons, not the least of which being that regulatory environments are particular to a jurisdiction. It can be very frustrating dealing with a company like Twitter at significant remove where they don't understand the particular regulatory conditions that apply within, in this case, the Australian jurisdiction.[202]

5.145When asked whether this requirement was being inserted into industry codes, eSafety advised that it prefers that industry associations adopt 'an outcomes and risk-based approach' to the codes, and Mr Dagg intimated:

I believe that the issue of resourcing is probably one best addressed through the Basic Online Safety Expectations, not necessarily through the codes, because the codes focus on that specific issue of how companies deal with class 1 and class 2 content on their service.[203]

Action by the eSafety Commissioner

5.146The Online Safety Act 2021 commenced in January 2022. The Act introduced new powers for eSafety, some of which eSafety has since described as 'world-first' or 'world-leading'.[204] eSafety advised that under its Regulatory Posture and Regulatory Priorities 2021-22, 'the rapid removal of CSEM continues to be one of our highest priorities'.[205]

5.147A representative of the Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Ms Bridget Gannon, said the legislation has 'been reviewed and revised over the last few years', and so 'it is an updated piece of legislation now'.[206] When asked about the effectiveness of the Online Safety Act, Mr Dagg, from eSafety, advised:

We've found the improved powers to be effective. Prior to the Online Safety Act's commencement, when the Online Content Scheme was contained in the Broadcasting Services Act, our takedown powers were limited to Australian hosted material, which meant there was little we could do other than act informally. While we haven't issued removal notices in relation to child sexual exploitation material in Australia, we have issued removal notices under the Online Safety Act—in particular following the Buffalo terror attack in May this year [2022]. The content we saw was hosted overseas—some of it was hosted on Kiwi Farms, to be precise—and we found the removal notices were effective in about two-thirds of cases. This is an illustration of how the legislation can work to tackle this kind of content. To Ms Gannon's point, we then used our ancillary powers under the act to issue link deletion notices to Google and to have the links removed from the Bing service operated by Microsoft as well.[207]

Online Content Scheme

5.148One area of work is under the Online Content Scheme, which enables eSafety to regulate and require the rapid removal of illegal content such as CAM. The Online Safety Act had the effect of 'strengthening and broadening' eSafety's powers under the pre-existing scheme.[208] Under the scheme, eSafety receives complaints from the public about illegal or harmful online content, and:

Of the investigations we carry forward from these complaints, 99% relate to CSEM and all but a handful of these items are notified to the International Association of Internet Hotlines (INHOPE) network by eSafety for rapid removal within the host jurisdiction.[209]

5.149In 2021-22, eSafety 'notified almost 11,000 CSEM items to INHOPE for removal and law enforcement action in the host jurisdiction'.[210]

Basic Online Safety Expectations

5.150The Online Safety Act also empowers eSafety to 'require online services providers to report on the reasonable steps they are taking to comply with the Basic Online Safety Expectations (BOSE)'. The BOSE were determined by the Minister for Communications. eSafety advised that '[n]o other regulator has equivalent powers'.[211]

5.151eSafety has issued two rounds of BOSE notices to online services that focused on child sexual exploitation and abuse. A first round was issued in August 2022 to Apple, Meta (and WhatsApp), Microsoft (and Skype), Omegle, and Snap.[212] These providers were:

…asked specific questions about the tools, policies and processes they are using to address various forms of CSEA [child sexual exploitation and abuse], such as the proliferation of online CSEA material, the online grooming of children, and the use of video calling and conferencing services to provide live feeds of child abuse.[213]

5.152When these notices were issued, the eSafety Commissioner remarked:

We have seen a surge in reports about this horrific material since the start of the pandemic, as technology was weaponised to abuse children. The harm experienced by survivors is perpetuated when platforms and services fail to detect and remove the content...

We know there are proven tools available to stop this horrific material being identified and recirculated, but many tech companies publish insufficient information about where or how these tools operate, and too often claim that certain safety measures are not technically feasible.

Industry must be upfront on the steps they are taking, so that we can get the full picture of online harms occurring and collectively focus on the real challenges before all of us. We all have a responsibility to keep children free from online exploitation and abuse.[214]

5.153In December 2022, eSafety reported on the companies' responses, with the eSafety Commissioner saying the report 'shows us that some companies are making an effort to tackle the scourge of online child sexual exploitation material, while others are doing very little'.[215] The following is one example of this variation:

Some providers are checking for new or 'unseen' CSEA material, or using technology to detect potential grooming conversations, while eSafety was told by another provider that there is no technology good enough for either purpose.[216]

5.154Mr Dagg, from eSafety, explained how the BOSE process was used to pursue concerns about live streaming, which he said is:

…of enormous concern. That's why we focused our original first set of BOSE notices on Microsoft and Apple in respect of the Teams and Skype products, Microsoft and Apple's livestream products. Unfortunately the answers we got were far from satisfactory. There was very little evidence to show that, particularly in relation to Skype, which we understand from our conversations with law enforcement colleagues, is one of major vectors for livestreamed child exploitation, that they had really invested much at all in terms of finding means by which they can detect and cease livestream transmission. We heard from Microsoft that it takes them on average two days to respond to a complaint about child sexual exploitation material appearing via the Skype service, which we think is too long, particularly when you compare it with some other responses through that set of BOSE notices. We think the industry can do more and should be doing more in this area.[217]

5.155A further round of notices was issued in February 2023 to Discord, Google, TikTok, Twitch and X (Twitter).[218] When issuing these notices, the eSafety Commissioner commented:

Our first set of notices sent to companies including Apple, Meta and Microsoft in August last year, revealed many companies were not taking even relatively simple steps to protect children and are failing to use widely available PhotoDNA technology to detect and remove child abuse material.

No solution ever presents itself by ignoring the problem. We need the companies to start turning the lights on, so we can get a true sense of the size and scope of this problem.

Implementation of tools matters as well as the existence of them.

It's time for all members of the online industry to step up and use their financial, intellectual, and technical resources to identify and remove this material from their platforms because even one child sexual exploitation image, is one too many.[219]

5.156In October 2023, eSafety found that Google and X (Twitter) 'did not comply with the Notices given to them to the extent that they were able'. It explained that Google was given a formal warning and X was fined over $600,000:

Google provided answers in certain instances that were not relevant, or were generic, and in other instances provided aggregated information across multiple services where information regarding specific services was required. Google has been given a formal warning, notifying the company of its failure to comply and warning against non-compliance in the future.

eSafety considered Twitter’s failure to comply to be more serious. In some instances Twitter failed to provide any response to the question, such as by leaving the boxes entirely blank. In other instances, Twitter provided a response that was otherwise incomplete and/or inaccurate. Despite Twitter being given further opportunities to provide the information required by the Notice, significant gaps remain. eSafety has given a service provider notification to Twitter, notifying it of the non-compliance. The service provider notification has also been published on eSafety's website. Twitter has also been given an infringement notice for AUD$610,500 for its noncompliance. Twitter has 28 days to request the withdrawal of the infringement notice or to pay the penalty. If Twitter chooses not to pay the infringement notice, it is open to the Commissioner to take other action.[220]

5.157Similar to its report on the first round of notices, eSafety's October 2023 report:

…again shows significant variation in the steps being taken by providers to protect users and the wider Australian public. Similar to the 2022 report, eSafety discovered that there is no common baseline in terms of the safety protections in place. Detection tools are not being used consistently across different services, even where multiple services are owned by the same parent company.[221]

5.158eSafety pointed out that 'where the Notices have provided information of potential failure to implement the Expectations, eSafety will be engaging with those providers to understand their plans to address these safety shortcomings and any obstacles to compliance'.[222]

5.159Since eSafety published its report, it has been reported that X (Twitter) failed to pay the fine by the deadline. An eSafety spokesperson reportedly said that eSafety would consider further steps on the matter. It has also been reported that X has sought judicial review of these matters in the Federal Court of Australia.[223]

5.160In November 2023, the Minister for Communications, the Hon Michelle Rowland MP, announced public consultation on 'amendments to the existing BOSE Determination to ensure it remains relevant in response to new and emerging harms'. Submissions are sought by February 2024. The minister's media release said:

The expanded BOSE Determination will include – among other things:

Ensuring the best interests of the child is a primary consideration for all services used by children, and that services should implement measures to prevent children accessing age-inappropriate content;

An express focus on minimising the creation and amplification of unlawful or harmful material through generative artificial intelligence;

Developing processes for detecting and addressing hate speech which breaches a service’s terms of use; and

That service providers publish regular transparency reports to explain steps being taken to keep Australians safe online.[224]

Industry codes and standards

5.161As well as the process for BOSE notices, the Online Safety Act provides for 'representatives of sections of the online industry to develop new industry codes relating to the online activities of participants in those sections of the online industry'. eSafety said the codes are 'intended to regulate illegal and restricted content, including CSEM'.[225] eSafety also explained that the Act provides for the eSafety Commissioner to decide 'whether the codes provide appropriate community safeguards' and, if a code does not, then the Commissioner 'is able to determine industry standards'.[226] eSafety explained:

Once codes or standards are in place, eSafety will be able to receive complaints and investigate potential breaches which will be enforceable by civil penalties, enforceable undertakings and injunctions to ensure compliance.[227]

5.162In September 2021, eSafety published a position paper 'to help industry in the code development process' and, on 11 April 2022, eSafety formally requested the development of codes to address class 1 content (which includes CAM). After conducting public consultation, industry associations submitted draft codes in November 2022. In February 2023, eSafety asked the industry to submit revised codes as the draft codes were considered 'unlikely to provide the appropriate community safeguards required for them to be registered'.[228]

5.163Industry submitted revised draft codes on 31 March 2023 and the eSafety Commissioner issued her decision on 31 May 2023:

Five industry codes were found to meet statutory requirements: those covering Social Media Services, App Distribution Services, Hosting Services, Internet Carriage Services, and Equipment.

Two industry codes were found not to meet the statutory test and industry standards will be drafted in their place: those covering Designated Internet Services (covering apps, websites, and file and photo storage services like Apple iCloud and Microsoft OneDrive) and Relevant Electronic Services (covering dating sites, online games and instant messaging).

The Commissioner reserved her decision on one code (covering Search Engines) over concerns it was no longer fit for purpose following the integration of generative AI into search engine functions. eSafety requested a revised code within four weeks addressing these concerns.[229]

5.164A key factor in the rejection of these codes was the adequacy of their measures for detecting CAM. The eSafety Commissioner was quoted in a media statement:

For example, the Designated Internet Services code still doesn't require file and photo storage services like iCloud, Google Drive, or OneDrive to detect and flag known child sexual abuse material.

We know that online storage services like these are used to store and share child sexual abuse material and pro-terror material between offenders.

And the Relevant Electronic Services code also doesn't require email services and some partially encrypted messaging services to detect and flag this material either, even though we know there are proactive steps they can take to stem the already rampant sharing of illegal content.[230]

5.165In September 2023, eSafety confirmed it would register a revised Search Engine code and said:

The strengthened Search Code now requires services like Google, Bing, DuckDuckGo and Yahoo to take important steps to reduce the risk that material like child abuse material is returned in search results and that AI [artificial intelligence] functionality integrated with the search engines are not used to generate "synthetic" versions of this material.[231]

5.166In November 2023, eSafety published draft industry standards covering Designated Internet Services and Relevant Electronic Services. eSafety has opened public consultation on the draft standards and said that they 'address the production, distribution and storage of "synthetic" child sexual abuse and pro-terror material, created using open-source software and generative AI'.[232]

Other eSafety work

5.167In addition to the above areas of work, eSafety has 'spearheaded the Safety by Design initiative', which 'focusses on the ways technology companies can minimise online threats to users – especially younger users – by anticipating, detecting, and eliminating online harms before they occur'.[233] The initiative is guided by three principles: service provider responsibility; user empowerment and autonomy; and transparency and accountability.[234] As an example of this work, on 21 September 2023 eSafety announced that, along with RMIT University, it was launching 'a free short course for up-and-coming tech leaders to prevent their platforms and services being unintentionally weaponised to carry out abuse'.[235] eSafety submitted that '[e]mbedding safety into online products and services as core features from the very outset of product design is fundamental to the Safety by Design ethos'.[236]

5.168A similar view was presented by the UK National Crime Agency, which posited that '[i]n democratic society we all have an interest in public safety, therefore technology companies should not design services that undermine public safety'. The National Crime Agency proposed that 'companies should not implement changes to their systems until they have fully invested in possible solutions, and until they can ensure that the systems they would apply to maintain the safety of their users are fully tested and effective'.[237]

5.169eSafety also highlighted the importance of international engagement and said it is 'increasingly understood that voluntary actions alone against CSEM have proven insufficient'.[238] In September 2023, eSafety attended the first annual in-person meeting of the Global Online Safety Regulators Network.[239] eSafety launched the network in November 2022 with the UK, Ireland and Fiji; it has since expanded to include South Korea, South Africa and a group of observers.[240] The network shares 'information, best practice, expertise and experience, to support harmonised or coordinated approaches to online safety issues'.[241] Following the September 2023 meeting, the eSafety Commissioner commented:

Every year we see the scourge of online abuse and exploitation grow and new forms of harm are emerging all the time, such as AI-generated child sexual abuse material. Without coordinated global action we’re limited in our ability to stop it.

Rather than a global 'splinternet' of inconsistent regulation, we need an effective network of global regulators working together to make the online world safer.[242]

5.170The Attorney-General's Department advised that the Five Country Ministerial Forum (which brings together the Five Eyes security ministers) has developed, in partnership with the digital industry, Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse. The department advised that 16 companies have endorsed the principles to date.[243] It also advised that a Washington-based Digital Industry Officer role has been established 'to build strategic relationships with the technology industry, civil society and academia to combat online child sexual exploitation and abuse'.[244]

Footnotes

[1]Office of the eSafety Commissioner (eSafety), Submission 20, p. 15. Also see, for example, Associate Professor Benoit Leclerc, Associate Professor Jesse Cale and Professor Thomas Holt, Submission 8, [p. 9]; Department of Home Affairs, Submission 25, p. 18; Australian Institute of Criminology, Submission 37, p. 9.

[2]eSafety, Submission 20, p. 12. Also see evidence from technology companies regarding, for instance, reporting CAM to NCMEC: Communications Alliance and Australian Mobile Telecommunications Association, Submission 13, p. 5; Technology Coalition, Submission 15, [p. 2]; Google, Submission 19, p. 3; Facebook (Meta), Submission 24, p. 2; TikTok, Submission 49, [p. 4]; Twitch, Submission 51, p. 4.

[3]eSafety, Submission 44, pp. 11–12.

[4]eSafety, Submission 20, p. 12; eSafety, Submission 44, p. 12.

[5]Julie Inman Grant, eSafety Commissioner, 'We lead global fight to rid web of vile material', The Australian, 8 June 2023, p. 18.

[6]Dr Rick Brown, Deputy Director, Australian Institute of Criminology, Committee Hansard, 15 November 2022, p. 38.

[7]eSafety, Submission 20, p. 13. Further discussion about the development of tools by technology providers is in, for example, Cyber Security Cooperative Research Centre, Submission 1, pp. 12–13; Associate Professor Leclerc, Associate Professor Cale and Professor Holt, Submission 8, [p. 10]; Communications Alliance and Australian Mobile Telecommunications Association, Submission 13, p. 5.

[8]Associate Professor Leclerc, Associate Professor Cale and Professor Holt, Submission 8, [p. 10].

[9]Communications Alliance and Australian Mobile Telecommunications Association, Submission 13, p. 4; Technology Coalition, Submission 15, [pp. 1–2]; Google, Submission 19, p. 1; Facebook (Meta), Submission 24, p. 2; TikTok, Submission 49, [p. 4]; Twitch, Submission 51, p. 1; Ms Kathleen Reen, Head of Asia-Pacific, Global Government Affairs, X (Twitter), Committee Hansard, 10 August 2023, p. 8.

[10]Google, Submission 19, p. 3.

[11]TikTok, Submission 49, [p. 2]. Also see Ms Ella Woods-Joyce, Acting Director of Public Policy, TikTok, Committee Hansard, 26 July 2023, p. 1.

[12]Facebook (Meta), Submission 24, p. 2. Also see Ms Antigone Davis, Global Head of Safety, Meta, Committee Hansard, 10 December 2021, p. 1.

[13]Ms Emily Cashman Kirstein, Manager, Child Safety, Government Affairs and Public Policy, Google, Committee Hansard, 10 August 2023, p. 3. Also see Google, Submission 19, p. 4.

[14]Facebook (Meta), Submission 24, p. 7. Also see Ms Mia Garlick, Regional Director of Public Policy, Meta, Committee Hansard, 20 February 2023, p. 1.

[15]Ms Garlick, Meta, Committee Hansard, 20 February 2023, p. 1.

[16]Facebook (Meta), Submission 24, p. 5.

[17]TikTok, Submission 49, [pp. 5–6].

[18]Twitch, Submission 51, pp. 3–4.

[19]Ms Lucinda Longcroft, Director, Government Affairs and Public Policy, Australia and New Zealand, Google, Committee Hansard, 10 August 2023, p. 1.

[20]Ms Reen, X (Twitter), Committee Hansard, 10 August 2023, p. 8.

[21]Twitch, Submission 51, p. 4.

[22]Facebook (Meta), Submission 24, p. 10.

[23]Ms Longcroft, Google, Committee Hansard, 10 August 2023, p. 1.

[24]Ms Garlick, Meta, Committee Hansard, 20 February 2023, p.1.

[25]TikTok, Submission 49, [pp. 4–5]. Also see Ms Woods-Joyce, TikTok, Committee Hansard, 26 July 2023, p. 3.

[26]TikTok, answers to questions on notice, 26 July 2023 (received 29 August 2023), [p. 2].

[27]Ms Cashman Kirstein, Google, Committee Hansard, 10 August 2023, p. 5.

[28]Ms Cashman Kirstein, Google, Committee Hansard, 10 August 2023, p. 5. Also see Google, answers to questions on notice, 10 August 2023 (received 31 August 2023), [pp. 1–2].

[29]Technology Coalition, Submission 15, [pp. 1–2].

[30]For example, Cyber Security Cooperative Research Centre, Submission 1, p. 12; TikTok, Submission49, [p. 5]; Twitch, Submission 51, p. 4.

[31]Google, Submission 19, p. 2.

[32]Communications Alliance and Australian Mobile Telecommunications Association, Submission13, p.5.

[33]Meta, answers to questions on notice, 10 December 2021 (received 14 January 2022), [pp. 1–2]; TikTok, Submission 49, [pp. 2–3]; Twitch, Submission 51, pp. 2–3.

[34]TikTok, Submission 49, [p. 2]. Also see Ms Woods-Joyce, TikTok, Committee Hansard, 26 July 2023, p. 1.

[35]Mr Nick Pickles, Head of Global Government Affairs, X (Twitter), Committee Hansard, 10 August 2023, p. 9.

[36]Twitch, Submission 51, p. 3.

[37]Ms Reen, X (Twitter), Committee Hansard, 10 August 2023, p. 14.

[38]Meta, answers to questions on notice, 10 December 2021 (received 14 January 2022), [p. 2].

[39]TikTok, answers to questions on notice, 26 July 2023 (received 16 August 2023), [p. 1].

[40]TikTok, answers to questions on notice, 26 July 2023 (received 16 August 2023), [pp. 1–2]. Also see Ms Woods-Joyce, TikTok, Committee Hansard, 26 July 2023, pp. 2, 4–5.

[41]TikTok, answers to questions on notice, 26 July 2023 (received 29 August 2023), [p. 1].

[42]Meta, answers to questions on notice, 10 December 2021 (received 14 January 2022), [pp. 1–2]. Also see Ms Davis, Meta, Committee Hansard, 10 December 2021, p. 2.

[43]Meta, 'Introducing New Ways to Verify Age on Instagram', Webpage, 23 June 2022, last updated 2 March 2023, https://about.fb.com/news/2022/06/new-ways-to-verify-age-on-instagram/ (accessed 6 September 2023).

[44]Twitch, Submission 51, p. 3.

[45]Mr Pickles, X (Twitter), Committee Hansard, 10 August 2023, p. 9.

[46]TikTok, answers to questions on notice, 26 July 2023 (received 16 August 2023), [p. 2].

[47]Uniting Church in Australia, Synod of Victoria and Tasmania (Uniting Church Synod), Submission 17, pp. 36–37. Also see Uniting Church Synod, answers to questions on notice, 21 November 2022 (received 2 December 2022).

[48]Safe on Social, Submission 50.1, [p. 1].

[49]Collective Shout, Submission 16, pp. 9–10 (citations omitted).

[50]Dr Brown, Australian Institute of Criminology, Committee Hansard, 15 November 2022, p. 35.

[51]Claudia Long, 'Children easily bypassing age verification online, putting them at risk of abuse, eSafety commissioner says', ABC News, 5 September 2023.

[52]House of Representatives Standing Committee on Social Policy and Legal Affairs, Protecting the age of innocence: Report of the inquiry into age verification for online wagering and online pornography, February 2020, recommendation 3, pp. 71–72. Also see eSafety, answers to questions on notice, 15 November 2022 and 21 November 2022 (received 2 December 2022), [pp. 1–2].

[53]eSafety, Roadmap for age verification and complementary measures to prevent and mitigate harms to children from online pornography, March 2023, p. 16 (citations omitted); also see p. 28.

[54]eSafety, 'eSafety welcomes government's response to age verification roadmap', Media release, 31 August 2023.

[55]eSafety, Roadmap for age verification and complementary measures to prevent and mitigate harms to children from online pornography, March 2023, pp. 28–29.

[56]Australian Government, Government response to the Roadmap for Age Verification, August 2023, pp. 3–5.

[57]Australian Government, Government response to the Roadmap for Age Verification, August 2023, pp. 8–9; the Hon Michelle Rowland MP, Minister for Communications, 'Albanese Government takes major steps forward to improve online safety', Media release, 22 November 2023.

[58]eSafety, Submission 20, p. 15.

[59]eSafety, answers to questions on notice, 9 December 2021 (received 24 December 2021), [p. 2].

[60]Dan Sexton, Chief Technology Officer, Internet Watch Foundation, 'Not all Encryption is the same: social media is not ready for End-to-End Encryption', Blogpost, 14 March 2022, https://www.iwf.org.uk/news-media/blogs/not-all-encryption-is-the-same-social-media-is-not-ready-for-end-to-end-encryption/ (accessed 13 June 2023).

[61]Facebook (Meta), Submission 24, p. 16; Australian Institute of Criminology, Submission 37, p. 8.

[62]Google, Submission 19, p. 2.

[63]Mr Pickles, X (Twitter), Committee Hansard, 10 August 2023, p. 13.

[64]Ms Woods-Joyce, TikTok, Committee Hansard, 26 July 2023, p. 7.

[65]For example, Communications Alliance and Australian Mobile Telecommunications Association, Submission 13, p. 5; Digital Rights Watch, Submission 47, pp. 2–3.

[66]Google, Submission 19, p. 2. Also see Ms Cashman Kirstein, Google, Committee Hansard, 10 August 2023, p. 2.

[67]Facebook (Meta), Submission 24, p. 15. Also see Ms Garlick, Meta, Committee Hansard, 10 December 2021, p. 3.

[68]Ms Garlick, Meta, Committee Hansard, 20 February 2023, pp. 5–6. For other discussion of this point, see Australian Institute of Criminology, Submission 37, p. 8; Ms Davis, Meta, Committee Hansard, 10 December 2021, pp. 4, 7–8; Google, Submission 19, p. 2.

[69]Ms Davis, Meta, Committee Hansard, 10 December 2021, p. 7.

[70]Digital Rights Watch, Submission 47, pp. 2–3.

[71]Digital Rights Watch, Submission 47, p. 3 (citations omitted).

[72]Digital Rights Watch, Submission 47, p. 4. Also see Communication Alliance, answers to questions on notice, 10 December 2021 (received 20 January 2022), [p. 3]; Facebook (Meta), Submission 24, pp.15–16.

[73]Digital Rights Watch, Submission 47, p. 1.

[74]For example, Cyber Security Cooperative Research Centre, Submission 1, pp. 10–11, 13; Project Karma, Submission 10, p. 8; Collective Shout, Submission 16, pp. 12–13; Australian Federal Police, Submission 18, p. 15; Department of Home Affairs, Submission 25, p. 3; Queensland Police Service, Submission 29, p. 3; Victoria Police, Submission 30, p. 7; Uniting Church Synod, Submission 40, pp. 4–5.

[75]Facebook (Meta), Submission 24, p. 16; Australian Institute of Criminology, Submission 37, p. 8.

[76]International statement: end-to-end encryption and public safety, in Department of Home Affairs, answers to questions on notice, 20 February 2023 (received 7 March 2023). Also see Department of Home Affairs, Submission 25, p. 18.

[77]Department of Home Affairs, Submission 25, p. 3.

[78]For example, Uniting Church Synod, Submission 17, p. 25; Australian Federal Police, Submission 38.1, [p. 3]; eSafety, Submission 44, p. 12; Ms Anne-Louise Brown, Director of Corporate Affairs and Policy, Cyber Security Cooperative Research Centre, Committee Hansard, 9 December 2021, p. 9. Also see NCMEC, 'End-to-end encryption: Ignoring abuse won't stop it', Webpage, undated, https://www.missingkids.org/theissues/end-to-end-encryption (accessed 8 June 2023).

[79]Australian Institute of Criminology, Submission 37, pp. 8–9 (citations omitted).

[80]Dr Brown, Australian Institute of Criminology, Committee Hansard, 15 November 2022, pp. 38–39.

[81]Dr Brown, Australian Institute of Criminology, Committee Hansard, 15 November 2022, p. 37. Also see Australian Institute of Criminology, answers to questions on notice, 15 November 2022 (received 2 December 2022), [pp. 2, 5]; Coen Teunissen and Sarah Napier, Australian Institute of Criminology, 'Child sexual abuse material and end-to-end encryption on social media platforms: An overview', Trends & issues in crime and criminal justice, No. 653, July 2022, pp. 4–5.

[82]Uniting Church Synod, Submission 40, p. 4.

[83]See Department of Home Affairs, Submission 25, pp. 13–14.

[84]eSafety, Submission 44, p. 12.

[85]Australian Institute of Criminology, Submission 37, p. 8 (citations omitted).

[86]Australian Institute of Criminology, Submission 37, p. 8 (citations omitted).

[87]Australian Federal Police, Submission 38.1, [p. 3].

[88]National Crime Agency, United Kingdom, Submission 31, p. 13.

[89]Director-General Graeme Biggar, United Kingdom National Crime Agency, 'DG NCA Graeme Biggar delivers RUSI's 4th Annual Security Lecture', Speech, 31 October 2023.

[90]National Crime Agency, United Kingdom, Submission 31, p. 3.

[91]National Crime Agency, United Kingdom, Submission 31, pp. 3–4 (paragraph numbers omitted).

[92]ECPAT International, Submission 9, pp. 5–6.

[93]Uniting Church Synod, Submission 17, p. 26. Also see Dr Mark Zirnsak, Senior Social Justice Advocate, Uniting Church Synod, Committee Hansard, 15 November 2022, p. 8.

[94]Dr Zirnsak, Uniting Church Synod, Committee Hansard, 9 December 2021, pp. 1–2. Also see Uniting Church Synod, Submission 17, pp. 6–8.

[95]Facebook (Meta), Submission 24, p. 15. Also see Ms Garlick, Meta, Committee Hansard, 10 December 2021, p. 3.

[96]For example, Ms Davis, Meta, Committee Hansard, 10 December 2021, pp. 3, 5; Ms Garlick, Meta, Committee Hansard, 20 February 2023, p. 3.

[97]Ms Davis, Meta, Committee Hansard, 10 December 2021, p. 3. Also see Facebook (Meta), Submission 24, p. 16.

[98]Ms Davis, Global Head of Safety, Meta, Committee Hansard, 10 December 2021, pp. 1–2, 3. Also see Ms Garlick, Meta, and Mr Josh Machin, Head of Public Policy for Australia, Meta, Committee Hansard, 20 February 2023, pp. 2, 6; Meta, answers to questions on notice, 10 December 2021 (received 14 January 2022), [p. 5].

[99]Ms Davis, Meta, Committee Hansard, 10 December 2021, p. 3.

[100]Ms Davis, Meta, Committee Hansard, 10 December 2021, pp. 3, 4.

[101]Ms Garlick, Meta, Committee Hansard, 20 February 2023, p. 3.

[102]Facebook (Meta), Submission 24, p. 16. Also see, for example, Ms Davis, Meta, Committee Hansard, 10 December 2021, pp. 6, 8.

[103]Ms Garlick, Meta, Committee Hansard, 20 February 2023, p. 3.

[104]Mr Machin, Meta, Committee Hansard, 20 February 2023, p. 5.

[105]Ms Davis, Meta, Committee Hansard, 10 December 2021, p. 4. Also see Facebook (Meta), Submission 24, p. 5; Ms Garlick, Meta, Committee Hansard, 20 February 2023, p. 6; Communications Alliance, answers to questions on notice, 10 December 2021 (received 20 January 2022), [pp. 2–3].

[106]Google, Submission 19, p. 3.

[107]Ms Cashman Kirstein, Google, Committee Hansard, 10 August 2023, p. 2.

[108]Ms Cashman Kirstein, Google, Committee Hansard, 10 August 2023, pp. 3–4.

[109]Facebook (Meta), Submission 24, p. 15. Also see Ms Davis, Meta, Committee Hansard, 10 December 2021, p. 8.

[110]Communications Alliance, answers to questions on notice, 10 December 2021 (received 20 January 2022), [p. 3].

[111]Communications Alliance and Australian Mobile Telecommunications Association, Submission 13, p. 5.

[112]Digital Rights Watch, Submission 47, pp. 4–5.

[113]Digital Rights Watch, Submission 47, p. 6 (citations omitted).

[114]ECPAT International, Submission 9, p. 6 (citations omitted).

[115]Uniting Church Synod, Submission 40, pp. 4–5.

[116]Dr William Stoltz, Submission 3, p. 7.

[117]Now the Department of Infrastructure, Transport, Regional Development, Communications and the Arts.

[118]Dr Stoltz, private capacity, Committee Hansard, 10 December 2021, p. 9. Also see Dr Stoltz, Submission 3, pp. 2, 7.

[119]Ms Davis, Meta, Committee Hansard, 10 December 2021, p. 4.

[120]Department of Home Affairs, Submission 25.1, p. 3.

[121]Department of Home Affairs, Submission 25.1, p. 4.

[122]Department of Home Affairs, Submission 25.1, p. 4.

[123]Ms Brown, Cyber Security Cooperative Research Centre, Committee Hansard, 9 December 2021, pp. 8–9.

[124]National Crime Agency, United Kingdom, Submission 31, p. 14. Also see Uniting Church Synod, Submission 17, p. 26.

[125]Virtual Global Taskforce, 'VGT position on End-to-End Encryption', in National Crime Agency, United Kingdom, Submission 31, attachment 6, p. 2.

[126]Department of Home Affairs, Submission 25.1, p. 4. Also see, for example, ECPAT International, Submission 9, p. 6.

[127]Australian Institute of Criminology, Submission 37, p. 10.

[128]Department of Home Affairs, Submission 25.1, p. 4.

[129]For example, Samantha Murphy Kelly, 'Apple abandons controversial plan to check iOS devices and iCloud photos for child abuse imagery', CNN Business, 8 December 2022.

[130]eSafety, Submission 44, p. 12 (citations omitted). Also see, for example, Collective Shout, Submission 16, p. 13.

[131]Commissioner for Children and Young People, South Australia, Submission 11, p. 5.

[132]Google, Submission 19, pp. 4, 5. Also see Ms Longcroft, Google, Committee Hansard, 10 August 2023, p. 3.

[133]Google, Submission 19, p. 5.

[134]TikTok, Submission 49, [p. 7].

[135]Ms Woods-Joyce, TikTok, Committee Hansard, 26 July 2023, pp. 2–3, 4.

[136]Ms Garlick, Meta, Committee Hansard, 20 February 2023, p. 3.

[137]Ms Longcroft, Google, Committee Hansard, 10 August 2023, p. 2. Also see Google, Submission 19, p. 5.

[138]TikTok, Submission 49, [p. 8].

[139]Communications Alliance, answers to questions on notice, 10 December 2021 (received 20 January 2022), [p. 2].

[140]Communications Alliance and Australian Mobile Telecommunications Association, Submission 13, pp. 3–4, 6; Ms Christiane Gillespie-Jones, Director, Program Management, Communications Alliance, Committee Hansard, 10 December 2021, p. 16.

[141]Google, Submission 19, p. 5; TikTok, Submission 49, [pp. 7–8]. Also see, for example, Communications Alliance and Australian Mobile Telecommunications Association, Submission 13, p. 5.

[142]TikTok, Submission 49, [p. 8].

[143]For example, Uniting Church Synod, Submission 17, p. 28; Associate Professor Leclerc, Associate Professor Cale and Professor Holt, Submission 8, [pp. 9, 10].

[144]Victoria Police, answers to questions on notice, 10 December 2021 (received 7 February 2022), p. 2. Also see Victoria Police, Submission 30, p. 6; Victoria Police, Submission 46, p. 3.

[145]Victoria Police, Submission 30, p. 8.

[146]eSafety, Submission 20, pp. 14–15.

[147]eSafety, Submission 20, p. 15.

[148]eSafety, Submission 44, p. 9.

[149]Uniting Church Synod, Submission 17, p. 29.

[150]Uniting Church Synod, Submission 17, p. 30.

[151]Uniting Church Synod, Submission 17, p. 31 (citations omitted).

[152]Ms Longcroft, Google, Committee Hansard, 10 August 2023, p. 4.

[153]Ms Longcroft, Google, Committee Hansard, 10 August 2023, p. 4.

[154]Department of Home Affairs, Submission 25, p. 15.

[155]Department of Home Affairs, Submission 25, p. 16.

[156]Ms Mary-Jane Welsh, Detective Superintendent, Cybercrime Division, Crime Command, Victoria Police, Committee Hansard, 9 December 2021, p. 40. Also see Victoria Police, Submission 30, pp. 5, 6–7; Victoria Police, Submission 46, p. 3.

[157]NSW Police Force, Submission 26, p. 6.

[158]Uniting Church Synod, Submission 17, pp. 22–23. Also see Dr Zirnsak, Uniting Church Synod, Committee Hansard, 9 December 2021, p. 5.

[159]Communications Alliance, answers to questions on notice, 10 December 2021 (received 20 January 2022), [p. 2].

[160]Known as the AUS-US Data Access Agreement and, previously, the AUS-US CLOUD Act Agreement.

[161]Attorney-General's Department, Submission 43, pp. 6–7; Department of Home Affairs, 'Australia-US CLOUD Act Agreement', Webpage, undated, https://www.homeaffairs.gov.au/about-us/our-portfolios/national-security/lawful-access-telecommunications/australia-united-states-cloud-act-agreement (accessed 26 October 2023).

[162]Attorney-General's Department, Submission 43, pp. 6–7. Also see Ms Ciara Spencer, First Assistant Secretary, Law Enforcement Policy Division, Department of Home Affairs, Committee Hansard, 10 December 2021, p. 40; Joint Standing Committee on Treaties, Report 204: Agreement between the Government of Australia and the Government of the United States of America on Access to Electronic Data for the Purpose of Countering Serious Crime, December 2022.

[163]Mr Machin, Meta, Committee Hansard, 20 February 2023, p. 3.

[164]Google, Submission 19, p. 5.

[165]Ms Longcroft, Google, Committee Hansard, 10 August 2023, p. 4.

[166]Communications Alliance and Australian Mobile Telecommunications Association, Submission 13, p. 4.

[167]Communications Alliance, answers to questions on notice, 10 December 2021 (received 20 January 2022), [p. 2].

[168]Cyber Security Cooperative Research Centre, Submission 1, p. 8.

[169]Detective Superintendent Jayne Doherty, Commander, Child Abuse and Sex Crimes Squad, NSW Police Force, Committee Hansard, 10 December 2021, p. 30. Also see, for example, Ms Welsh, Victoria Police, Committee Hansard, 9 December 2021, p. 40.

[170]Victoria Police, Submission 30, p. 8.

[171]Uniting Church Synod, Submission 17, p. 37.

[172]Collective Shout, Submission 16, p. 11.

[173]eSafety, 'Second set of tech giants falling short in tackling child sexual exploitation material, sexual extortion, livestreaming of abuse', Media release, 16 October 2023.

[174]Victoria Police, Submission 30, p. 5; also see p. 8.

[175]Commissioner for Children and Young People, South Australia, Submission 11, p. 5.

[176]Australian Institute of Criminology, Submission 37, pp. 9–10 (citations omitted).

[177]International Justice Mission, Submission 53, pp. 6–7 (citations omitted).

[178]International Justice Mission, Submission 53, p. 7 (citations omitted).

[179]Collective Shout, Submission 16, p. 5.

[180]ECPAT International, Submission 9, p. 5.

[181]Cyber Security Cooperative Research Centre, Submission 1, pp. 13–14.

[182]Uniting Church Synod, Submission 17, p. 28 (citations omitted).

[183]Uniting Church Synod, Submission 17, pp. 24–25.

[184]Dr Zirnsak, Uniting Church Synod, Committee Hansard, 9 December 2021, p. 2.

[185]Uniting Church Synod, Submission 40, p. 3.

[186]Uniting Church Synod, Submission 17, pp. 21–22. Also see Dr Zirnsak, Uniting Church Synod, Committee Hansard, 10 December 2021, p. 3.

[187]Uniting Church Synod, Submission 17, p. 33.

[188]Dr Zirnsak, Uniting Church Synod, Committee Hansard, 9 December 2021, p. 2.

[189]Uniting Church Synod, Submission 40, p. 3. Also see Uniting Church Synod, answers to questions on notice, 21November2022 (received 2 December 2022), pp. 1–2.

[190]Uniting Church Synod, Submission 17, p. 36. Also see Uniting Church Synod, Submission 40, p. 3; Dr Zirnsak, Uniting Church Synod, Committee Hansard, 9 December 2021, p. 1; eSafety, Submission 20, p. 14.

[191]Mr Glen Hulley, Founding Chief Executive Officer, Project Karma, Committee Hansard, 9 December 2021, pp. 28–29.

[192]Project Karma, Submission 36, pp. 1–2; Mr Hulley, Project Karma, Committee Hansard, 15 November 2022, p. 4.

[193]Project Karma, Submission 36, p. 4.

[194]Dr Stoltz, Submission 3, pp. 1, 3–4.

[195]Dr Stoltz, Submission 3, p. 5.

[196]Ms Garlick, Meta, Committee Hansard, 10 December 2021, p. 5.

[197]Dr Stoltz, Submission 3, p. 6.

[198]Dr Stoltz, Submission 3, pp. 2, 6.

[199]Mr Toby Dagg, Acting Chief Operating Officer, eSafety, Committee Hansard, 20 February 2023, p. 21.

[200]Mr Pickles, Twitter, Committee Hansard, 10 August 2023, p. 9.

[201]TikTok, answers to questions on notice, 26 July 2023 (received 29 August 2023), [p. 2]. Also see Ms Woods-Joyce, TikTok, Committee Hansard, 26 July 2023, p. 7.

[202]Mr Dagg, eSafety, Committee Hansard, 20 February 2023, p. 21.

[203]eSafety, answers to questions on notice, 20 February 2023 (received 3 March 2023), [p. 3]; Mr Dagg, eSafety, Committee Hansard, 20 February 2023, p. 21.

[204]For example, eSafety, 'eSafety Commissioner makes final decision on world-first industry codes', Media release, 1 June 2023; eSafety, 'Second set of tech giants falling short in tackling child sexual exploitation material, sexual extortion, livestreaming of abuse', Media release, 16 October 2023.

[205]eSafety, Submission 44, p. 2.

[206]Ms Bridget Gannon, Assistant Secretary, Online Safety Branch, Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Committee Hansard, 15 November 2022, p. 15.

[207]Mr Dagg, eSafety, Committee Hansard, 15 November 2022, p. 15.

[208]eSafety, Submission 44, pp. 2, 5–6; Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Submission 35, pp. 1–2. Also see Mr Dagg, eSafety, Committee Hansard, 15 November 2022, p. 12.

[209]eSafety, Submission 44, p. 3. Also see Mr Dagg, eSafety, Committee Hansard, 15 November 2022, p. 13.

[210]eSafety, Submission 44, p. 6.

[211]eSafety, Submission 44, p. 7. Also see Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Submission 35, pp. 1–2.

[212]eSafety, Submission 44, p. 7.

[213]eSafety, Basic Online Safety Expectations: Summary of industry responses to the first mandatory transparency notices, December 2022, p. 1.

[214]Ms Inman Grant, eSafety Commissioner, quoted in eSafety, Basic Online Safety Expectations: Summary of industry responses to the first mandatory transparency notices, December 2022, p. 1.

[215]Ms Inman Grant, eSafety Commissioner, quoted in eSafety, 'World-first report shows leading tech companies are not doing enough to tackle online child abuse', Media release, 15 December 2022.

[216]eSafety, Basic Online Safety Expectations: Summary of industry responses to the first mandatory transparency notices, December 2022, p. 2.

[217]Mr Dagg, eSafety, Committee Hansard, 20 February 2023, p. 24. Also see International Justice Mission, Submission 53, p. 4.

[218]eSafety, 'Twitter, TikTok and Google forced to answer tough questions about online child abuse', Media release, 23 February 2023.

[219]Ms Inman Grant, eSafety Commissioner, quoted in eSafety, Basic Online Safety Expectations: Summary of industry responses to mandatory transparency notices, October 2023, p. 3.

[220]eSafety, Basic Online Safety Expectations: Summary of industry responses to mandatory transparency notices, October 2023, p. 4.

[221]eSafety, Basic Online Safety Expectations: Summary of industry responses to mandatory transparency notices, October 2023, p. 4.

[222]eSafety, Basic Online Safety Expectations: Summary of industry responses to mandatory transparency notices, October 2023, p. 4.

[223]Cameron England, 'Elon Musk's X takes eSafety Commissioner to court after snubbing fine', The Australian, 16 November 2023.

[224]The Hon Michelle Rowland MP, Minister for Communications, 'Albanese Government takes major steps forward to improve online safety', Media release, 22 November 2023.

[225]eSafety, Submission 44, p. 2. Also see Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Submission 35, pp. 1–2.

[226]eSafety, Submission 44, p. 7. Also see eSafety, 'Industry codes and standards', Webpage, undated, https://www.esafety.gov.au/industry/codes (accessed 15 February 2023).

[227]eSafety, 'Online industry asked to address eSafety's concerns with draft codes', Media release, 9 February 2023.

[228]The draft codes were published by the industry associations on 22 February 2023. See eSafety, Submission 44, pp. 5, 7; eSafety, answers to questions on notice, 20 February 2023 (received 3 March 2023), [pp. 1–2]; eSafety, 'Industry codes and standards', Webpage, undated, https://www.esafety.gov.au/industry/codes (accessed 16 June 2023); eSafety, 'Online industry asked to address eSafety's concerns with draft codes', Media release, 9 February 2023.

[229]eSafety, 'eSafety Commissioner makes final decision on world-first industry codes', Media release, 1 June 2023; eSafety, 'Register of industry codes and industry standards for online safety', Webpage, undated, https://www.esafety.gov.au/industry/codes/register-online-industry-codes-standards (accessed 1 November 2023).

[230]Ms Inman Grant, eSafety Commissioner, quoted in eSafety, 'eSafety Commissioner makes final decision on world-first industry codes', Media release, 1 June 2023.

[231]eSafety, 'Search engine code gets green light with new AI protections', Media release, 8 September 2023.

[232]eSafety, 'eSafety welcomes feedback on draft industry standards to tackle online child sexual abuse and pro-terror material', Media release, 20 November 2023.

[233]eSafety, Submission 44, p. 5.

[234]eSafety, answers to questions on notice, 28 February 2023 (received 14 March 2023), [p. 2].

[235]eSafety, 'Free Safety by Design course gives future tech leaders competitive edge', Media release, 21 September 2023.

[236]eSafety, Submission 44, p. 5.

[237]National Crime Agency, United Kingdom, Submission 31, pp. 15–16.

[238]eSafety, Submission 44, p. 13.

[239]eSafety, 'Online safety regulators stand shoulder to shoulder as global network expands', Media release, 13 September 2023.

[240]Ms Inman Grant, eSafety Commissioner, 'Safety by design: protecting users, building trust and balancing rights in a generative AI world' in Australian Strategic Policy Institute: The Strategist, 1 November 2023.

[241]eSafety, 'The Global Online Safety Regulators Network', Webpage, undated, https://www.esafety.gov.au/about-us/who-we-are/international-engagement/the-global-online-safety-regulators-network (accessed 1 November 2023).

[242]Ms Inman Grant, eSafety Commissioner, quoted in eSafety, 'Online safety regulators stand shoulder to shoulder as global network expands', Media release, 13 September 2023.

[243]Attorney-General's Department, Submission 43, p. 8. Also see Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Submission 35, p. 3.

[244]Attorney-General's Department, Submission 43, p. 10.