Chapter 6 - Platforms

Current practice and identified gaps

6.1 The introductory chapters of this report outlined the central role that social media plays in the lives of many Australians, and noted that, depending on how it is used, social media can either strengthen or undermine democracy. Further, because of its depth and reach into nearly every person's life, social media is increasingly being used as a vector for foreign interference.

6.2 While some technology firms are taking steps to address foreign interference on their social media platforms, this chapter outlines the overwhelming evidence that nowhere near enough is being done.

6.3 An additional specific area of risk is social media platforms that originate from authoritarian states. These platforms are discussed separately, in acknowledgement of the particular risks they pose.

Platform responsibilities

6.4 Social media companies have responsibilities for content published on their platforms. With regard to foreign interference, the Australian Code of Practice on Misinformation and Disinformation (DIGI Code), while voluntary, does require platforms to 'address concerns regarding disinformation and credibility signalling for news content'.[1]

6.5 The Australian Communications and Media Authority (ACMA) noted that the DIGI Code has only eight signatories.[2] The ACMA stated there are a number of platforms operating in Australia which chose not to become part of the code, and the ACMA is aware of the 'extent to which there might be misinformation and disinformation which has the risk of contributing to serious harm'. The ACMA stated it had engaged with a number of those platforms over the years and encouraged them to join the code. The ACMA further noted that Digital Industry Group Inc. (DIGI) had updated the code's reporting requirements to reduce the reporting burden for digital platforms, so it hoped that might result in some platforms electing to sign up to the DIGI Code.[3]

6.6 The ACMA noted that it would be looking at information reporting requirements under a similar code operating in Europe, to compare and contrast with what platforms are reporting in Australia.[4]

Platform actions to address interference

6.7 Platforms provided details to the inquiry on the actions they take to address foreign interference. Platforms noted their actions are targeted at any instances of coordinated inauthentic behaviour (CIB), regardless of whether it is for the purpose of foreign interference or is undertaken by non-state actors (often financially motivated).

Meta actions

6.8 In its submission, Meta (which owns Facebook, Instagram, WhatsApp and the newly launched Threads) outlined for the committee the three main strategies it uses to combat foreign interference via its platforms:

automated defences that leverage machine learning to detect and block bad actors, such as systems that find and block millions of fake accounts every day, most within minutes of creation (see the illustrative sketch following this list);

threat intelligence analysts who hunt for and disrupt CIB not caught by the automated defences; and

collaboration by sharing information with counterparts in industry and with government, civil society and the media.[5]
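
Meta's submission does not describe these automated defences in technical detail. The sketch below illustrates, in simplified form, how weak behavioural signals of the kind Meta describes might be combined into a single risk score; the feature names, weights and thresholds are hypothetical, and production systems rely on learned models over far richer feature sets.

```python
# A toy illustration only: combining weak behavioural signals into a
# fake-account risk score. All feature names, weights and thresholds here are
# hypothetical; real systems use learned models over far richer features.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_hours: float     # time since registration
    actions_per_hour: float      # posts, follows and likes combined
    followers: int
    following: int
    profile_completeness: float  # 0.0 (empty profile) to 1.0 (fully filled in)

def fake_account_risk(s: AccountSignals) -> float:
    """Return a risk score in [0, 1]; higher means more likely inauthentic."""
    score = 0.0
    if s.account_age_hours < 24:            # brand-new accounts are higher risk
        score += 0.35
    if s.actions_per_hour > 50:             # superhuman activity rate
        score += 0.35
    if s.following > 0 and s.followers / s.following < 0.01:
        score += 0.15                       # mass-following, no reciprocation
    score += 0.15 * (1.0 - s.profile_completeness)
    return min(score, 1.0)

# A day-old account following thousands of users at machine speed scores ~0.99.
print(fake_account_risk(AccountSignals(6, 120, 3, 4000, 0.1)))
```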

6.9 The Department of Home Affairs (Home Affairs) noted the work undertaken by Meta, submitting that between 2017 and 2022 'Meta disabled and publicly reported on over 200 covert influence operations that violated its policies' and that these operations 'originated from over 60 countries and targeted both foreign nations and their own domestic public debate'.[6]

Twitter actions

6.10 Mr Nick Pickles, Head of Global Government Affairs, Twitter, told the committee that Twitter takes an approach of early intervention when it suspects foreign interference or CIB and, given the difficulty of definitively attributing content, does not necessarily wait for attribution before taking action:

Often we're looking at behaviour signals. They may be linked to accounts, networks or origin. Obviously many of these actors are trying to hide and obfuscate where they are, so it's not as simple as saying, 'Show me all the Twitter accounts from one country.' We look at content.

6.11 Mr Pickles then cited initiatives such as Community Notes, verification and Twitter Blue as additional programs designed to capture CIB. Community Notes was explained as allowing people on Twitter to add context to other people's tweets:

Those tweets can be added to any account, including advertisers, political leaders and prominent influencers. By looking at the people who are adding that data, we're already seeing a significant impact, with an up to 35 per cent reduction in the sharing of those accounts based on context added by other users.[7]
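
Twitter's evidence describes Community Notes only at a high level. Public documentation for the feature describes a 'bridging' approach: a note is surfaced as helpful only when raters who normally disagree both rate it helpful, estimated with a matrix factorisation over the ratings. The sketch below is a minimal, hypothetical reconstruction of that idea; the data, dimensionality and hyperparameters are invented for illustration and are not Twitter's actual implementation.

```python
# A minimal, hypothetical reconstruction of the "bridging" idea publicly
# described for Community Notes: a note's helpfulness is the part of its
# ratings NOT explained by rater/note viewpoint alignment. The data and
# hyperparameters here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes = 40, 2

# Synthetic ratings (user, note, rating): two viewpoint camps; note 0 is rated
# helpful by both camps ("bridging"), note 1 only by camp A.
ratings = []
for u in range(n_users):
    camp_a = u < n_users // 2
    ratings.append((u, 0, 1.0))
    ratings.append((u, 1, 1.0 if camp_a else 0.0))

mu = 0.0
bu, bn = np.zeros(n_users), np.zeros(n_notes)   # intercepts
fu = rng.normal(0, 0.1, n_users)                # 1-D viewpoint factors
fn = rng.normal(0, 0.1, n_notes)
lr, lam_b, lam_f = 0.05, 0.15, 0.03             # intercepts regularised harder

for _ in range(2000):                           # plain SGD
    for u, n, r in ratings:
        e = r - (mu + bu[u] + bn[n] + fu[u] * fn[n])
        mu += lr * e
        bu[u] += lr * (e - lam_b * bu[u])
        bn[n] += lr * (e - lam_b * bn[n])
        fu[u], fn[n] = (fu[u] + lr * (e * fn[n] - lam_f * fu[u]),
                        fn[n] + lr * (e * fu[u] - lam_f * fn[n]))

# bn captures helpfulness not explained by viewpoint alignment: the bridging
# note (0) should score clearly higher than the partisan note (1).
print("bridging note:", round(bn[0], 2), "partisan note:", round(bn[1], 2))
```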

LinkedIn

6.12 LinkedIn presents a different foreign interference risk than other social media platforms, as it is a professional networking site. In February 2023, the head of the Australian Security Intelligence Organisation (ASIO), Mr Mike Burgess, noted that ASIO had identified nearly 16 000 Australians publicly declaring on professional networking sites that they have a security clearance, and 1 000 more revealing they worked in the intelligence community. Mr Burgess said 'these people may as well add "high-value target" to their profiles'.[8]

6.13 In recent years, LinkedIn has been heavily abused by threat actors seeking to distribute malware, perform cyberespionage, steal credentials or conduct financial fraud. In 2022, LinkedIn was used by the North Korean Lazarus hacking group for cybercriminal and espionage operations, with victims contacted via targeted private messages on the platform.[9]

6.14 LinkedIn noted that the largest threat is posed by fake accounts and reported that in 2022 it 'blocked more than 80 million fake accounts worldwide, of which about 400 000 were attributed to Australia'.[10]

Google and YouTube

6.15 Google and YouTube provided information to the committee on the steps they take to address foreign interference on their platforms via the Threat Analysis Group (TAG), a 'global team of analysts and security experts that analyses and counters serious threats to Google and our users, including threats from government backed attackers, serious cybercriminals and information operations'.[11]

6.16In relation to countermeasures taken by YouTube, the committee was told:

We particularly focus on disrupting coordinated influence operations on YouTube. For example, in the first quarter in 2023 we terminated more than 900 YouTube channels linked to Russia and more than 18,000 linked to China, as part of our investigations into coordinated influence operations. These actions are in addition to YouTube's ongoing enforcement of community guidelines, which resulted in the removal of more than 5.6 million videos globally in the fourth quarter of 2022.[12]

6.17 However, YouTube did acknowledge that, even in the recent past (2021), it had not always acted to remove harmful content as quickly as it should have.[13]

WeChat

6.18 WeChat made a submission to the inquiry, as it did to the similar inquiry of the 46th Parliament; however, that submission contained no specific information on how it addresses disinformation, surveillance, censorship or CIB on its platform. Notably, WeChat declined multiple invitations to participate in a public hearing to answer direct questions from the committee, despite the invitations allowing participation via videoconference from any location in the world, an option other witnesses used.[14]

6.19 On 4 July 2023, as Chair of the Select Committee, Senator James Paterson sent an open letter to WeChat urging it to reconsider its refusal to appear at a public hearing. On 10 July 2023, WeChat responded to Senator Paterson to reiterate its stated intention of 'working collaboratively with Australian regulators and authorities' and said that it remained 'committed to providing responsive information to the committee in writing', but continued to refuse to appear in an open forum.[15]

6.20 In doing so, WeChat has demonstrated contempt for the Australian Parliament and has shown the Australian public that WeChat and its parent company, Tencent, are not genuinely committed to being held accountable for the serious allegations of censorship, surveillance and foreign interference at the hands of the Chinese Communist Party (CCP), conduct which is rife on its platform and is a real concern to WeChat users in Australia.

6.21 Senator Paterson, on behalf of the committee, submitted 53 detailed questions to WeChat asking for an explanation of its links to the CCP, whether it censors content critical of the Chinese Government and promotes CCP propaganda, and whether the application is used to surveil and target Australian users critical of the regime, including its human rights record.

6.22 WeChat was initially required to respond by 21 July 2023; however, it requested an extension of time via the secretariat and was given until 26 July 2023 to provide answers to the 53 questions.

6.23 WeChat submitted its response to the 53 questions on 26 July 2023; however, it failed to meet the transparency test, not providing direct answers to the questions asked of it.

6.24 WeChat flatly denies that censorship, surveillance and control are being exercised on the platform, even though this has been repeatedly and convincingly demonstrated by independent researchers and experts, including those who testified before the committee.

6.25 In relation to censorship on the platform, Dr Seth Kaplan testified that WeChat is:

… doing the censoring for the party, and there are what you might call united front and other agents whose content, basically, is being promoted … There is extensive censorship, there is extensive demoting, people are being banned and people are being suspended … I wouldn't say that the CCP is directly doing it. I think Tencent is under instructions on how it must manage the platform.[16]

6.26 In relation to surveillance and control on WeChat, Dr Kaplan added:

… you have to understand that this surveillance and control of the [Chinese] diaspora operates on two levels. One is WeChat, and then there are individuals. They could be United Front; they could be other actors…or they may be just rabid nationalists … The non-WeChat avenues for monitoring, constraining and controlling this vulnerable population would be much less effective if there were no WeChat.[17]

6.27 On WeChat's links to the CCP, CyberCX submitted that the 'risk that the dominant position of social media platforms is abused by malign actors is heightened for platforms, such as TikTok and WeChat, which are linked to authoritarian governments'. Internet 2.0 argued in its submission that 'we consider that WeChat is under structural pressure to support the CCP's rule of law. We believe this structural market pressure may come at a disadvantage to all other governments' rule of law'.[18]

6.28 The Australian Strategic Policy Institute's (ASPI) 2020 report into TikTok and WeChat found that WeChat has 'become the long arm of the Chinese regime, extending the PRC's [People's Republic of China's] techno-authoritarian reach into the lives of its citizens and non-citizens in the diaspora. WeChat users outside of China are increasingly finding themselves trapped in a mobile extension of the Great Firewall of China through which they're subjected to surveillance, censorship and propaganda'.[19]

6.29 WeChat's dismissive non-answers, blatant spin and corporate talking points are a disingenuous substitute for genuine assistance to the committee, and are further evidence of its contempt for the Australian Parliament.

TikTok

6.30 TikTok provided descriptions of the actions it takes against CIB on its platform. TikTok's Community Guidelines 'prohibit content that could mislead our community about civic processes or matters such as public health and safety'. TikTok submitted that these guidelines are enforced through safety professionals and moderators, technology-based flagging, and a fact-checking partnership with Australian Associated Press.[20]

6.31 TikTok reported that between July and September 2022, it blocked 41 459 159 spam accounts, removed 94 733 447 videos posted by spam accounts and removed 759 044 040 fake followers. TikTok submitted that, in the same period, it removed five networks involved in spam account activity, fake engagement and covert influence operations on its platform.[21] However, the committee notes this appears to be the global number of removed networks, which may be considered very low when compared against reports of the prevalence of disinformation influence operations.

Ongoing concerns

6.32 The rest of this chapter outlines concerns raised about the effectiveness of current actions in addressing foreign interference and CIB, as well as the complete absence of action in some cases.

Content moderation

6.33 The committee heard that content moderation is a concern to many stakeholders, who thought either that platforms were not removing malignant content fast enough, or that platforms were engaging in censorship by removing content that should remain, whether because of inappropriate requests from governments or because of problems with fact-checking processes.

6.34 The Australian Human Rights Commission (AHRC) noted that platforms use a mix of automated technology and human moderators to address disinformation, and it did not consider current efforts adequate given the growing volume of disinformation.[22]

6.35 Home Affairs noted a study which found weaknesses in TikTok's and Meta's moderation of advertisements as part of a controlled experiment:

TikTok failed to catch 90 per cent of ads featuring false and misleading messages about elections in the United States, while Meta was only '…partially effective in detecting and removing the problematic election ads' as part of the experiment. Similar studies conducted globally, indicate social media platform content moderation success can vary from one country to another.[23]

6.36 ASIO submitted that it would support measures requiring platforms and service providers to dedicate specific resources to the identification and moderation of disinformation content published on their services, 'commensurate with the volume of content and reach of their services'.[24]

Reporting content for removal

6.37 The removal of content which breaches platform standards has two aspects. The first is whether or not platforms are responsive to legitimate reports from individuals, agencies or experts on foreign interference content; the second, discussed later in this section, is how platforms respond when they receive inappropriate requests for content removal.

6.38 The committee heard from community members and expert organisations that reporting instances of foreign interference directly to platforms does not result in swift cooperation to take down content. Mrs Kateryna Argyrou, Co-Chair of the Australian Federation of Ukrainian Organisations, told the committee that:

On Facebook, where we can report certain posts, videos and content, we need to have hundreds, if not thousands, of our community members reporting it. Sometimes it gets taken down; in most cases it doesn't … we can report on YouTube taking action, but Facebook, which is a huge platform, to a large extent does absolutely nothing.[25]

6.39 Conversely, Meta advised the committee that its approach to 'make ourselves accessible and available to individuals and experts to come to us when they might be experiencing [harassment]' is 'in addition to the broader systems that we have in place to detect this type of behaviour proactively and remove it'.[26]

6.40 Ms Vicky Xu, a journalist and policy analyst, noted that current processes to remove content take far too long, particularly when the content is directed at intimidating or attacking an individual:

It took a couple of months for anything to actually happen and for YouTube to take action, and, by that time, hundreds of thousands of people had watched the content. I think there are things we can do to establish a kind of mechanism to—I can't speak more frankly than this—kill that content early and faster because that is very damaging. The same goes for platforms like Facebook and Twitter.[27]

6.41 Ms Xu noted the balancing act between taking action to protect individuals and protecting against undue censorship:

How do we treat this? If the Chinese state makes a whole package of information in order to conduct character assassination against journalists who live in democracies, what is the difference between that and, for example, hate speech or violent content? I don't think that these two categories of harmful information are, by nature, very different. Currently, as I understand it, we don't have the mechanism or a way to get there fast and get rid of this content. I'm very aware that, by speaking like this, I'm encouraging censorship of a kind, but I guess this is a question I'm going to throw back to you: do we need more censorship around these things, or do people like me just drown in misinformation?[28]

6.42 Twitter submitted that foreign interference should be approached 'as a broad geopolitical challenge, not one of content moderation', and argued that removal of content alone will not address this challenge.[29]

Extra-territorial censorship through informal channels as foreign interference

6.43 The committee heard that the system of reporting misinformation and disinformation for removal can itself be used as a tool of foreign interference or censorship and, further, that some of those requests are made informally through back channels by foreign governments and are not appropriately disclosed by social media platforms. This is of particular concern when requests are made under pressure from authoritarian states; the broader risks posed by those governments are discussed in detail later in this chapter.

6.44 As ASIO pointed out to the committee, there are a range of activities which, when conducted openly, would not constitute foreign interference under Australian legislation, but 'could become foreign interference if they involve the hidden hand of a foreign state'.[30]

6.45 The committee heard that extra-territorial censorship, such as blacklisting, visibility limiting or manipulation of visibility undertaken by social media platforms on request or under pressure from foreign governments, is an activity which can 'have a direct impact on the human rights of Australians or those living in Australia, as well as undermining Australia's democracy'.[31] This is particularly the case when requests or demands by foreign governments for social media companies to remove or reduce the visibility of content, or to de-boost political figures, are not disclosed to the public, and where the action taken by social media companies to censor or limit content affects the visibility of Australian users and free political discourse in Australia.

6.46 As the AHRC submitted to the committee: 'social media platforms, which function as a digital "town square" for free speech and self-expression, are increasingly affected by censorship. In particular, the expansion of the internet and social media has seen increased examples of extra-territorial censorship, where governments seek to suppress speech outside of their national borders'.[32] Badiucao, a notable artist who criticises the CCP, informed the committee that in some cases, the content reporting mechanisms were used by authoritarian states to censor dissidents:

For example, on Twitter, they would report certain accounts without any basis. But that kind of campaign was forcing the algorithm from Twitter to respond to these reports and suspend the account targeted. This methodology has been very useful and commonly used by the Chinese government to silence critical voices outside of China.[33]
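
The gaming dynamic Badiucao describes can be made concrete. The toy comparison below shows why a raw report-count threshold is trivially defeated by a coordinated brigade, and sketches one commonly discussed mitigation, weighting each report by the reporter's historical accuracy; the numbers and thresholds are hypothetical and do not reflect any platform's actual rules.

```python
# Hypothetical numbers only: why a raw report-count threshold is gameable by a
# coordinated brigade, and one commonly discussed mitigation, weighting each
# report by the reporter's historical accuracy.
def naive_suspend(report_count: int, threshold: int = 100) -> bool:
    # A brigade of throwaway accounts trivially clears a raw count.
    return report_count >= threshold

def weighted_suspend(reporter_weights: list[float],
                     threshold: float = 20.0) -> bool:
    """Each weight is the reporter's credibility in [0, 1], e.g. the fraction
    of their past reports that moderators upheld."""
    return sum(reporter_weights) >= threshold

# 150 reports from accounts whose past reports were upheld ~2% of the time:
brigade = [0.02] * 150
print(naive_suspend(len(brigade)))  # True  -> targeted account is suspended
print(weighted_suspend(brigade))    # False -> 3.0 < 20.0, brigade discounted
```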

6.47 Human Rights Watch also pointed to whistleblower Peiter Zatko, Twitter's former chief security officer, who claimed that the security shortcomings of Twitter constituted '[n]egligence and even complicity with respect to efforts by foreign governments to infiltrate, control, exploit, surveil and/or censor the company's platform, staff, and operations'.[34] Human Rights Watch further noted:

Before Musk's purchase of the platform, Twitter had quickly reacted to requests to protect the accounts of Chinese human rights defenders. After Musk's acquisition, the gutting of the infrastructure and staff that deal with these issues threatens to change that equation between the platform and China.[35]

6.48 The AHRC argued that 'transparency is the key to ensuring that censorship (including extra-territorial censorship) does not unduly restrict the exercise of free speech in Australia' and therefore recommended that 'government should mandate that all social media platforms publicly disclose, in detail, all the content they censor and make it an offence to censor content where that has not been publicly disclosed to users'.[36]

6.49 Twitter acknowledged in evidence to this committee that such pressure from foreign governments had regularly occurred and had not historically been disclosed to its users:

Senator Chandler: What if it was an informal request [to take down or censor a tweet] from a government official? Would you let the user know who had made that request?

Mr Pickles: Historically, no. But this is actually one of the areas where we are looking to make significant changes to guard against the kind of pressure that the Twitter files disclosed that we as a service face.[37]

6.50 Both Meta and Twitter referred to the transparency measures they have in place to report occasions where content is removed or restricted based on a legal request from a government or regulator. Meta advised the committee that it makes 'information transparently available relating to requests that we receive from all governments around the world for content restrictions or requests for user data'.[38]

6.51 Meanwhile, Twitter acknowledged that it may not be aware of all instances of requests to take down or censor content from officials acting on behalf of a foreign government:

Senator Chandler: What if the request to take down or censor a tweet was made by a foreign government? Would you let the user know that it was a foreign government making the request?

Mr Pickles: If it came through a legal process, yes, we would. If it was someone in the Twitter app pressing a reporting button as a regular user, we might not know that person is from a foreign government. But in the case of a legal process – we receive thousands every year – we do communicate to the individual and actually provide them with a copy of the request as well.[39]

6.52 The Twitter files referred to in Mr Pickles' evidence to the committee widely document recent revelations from Twitter itself that governments and government staff often make informal or semi-formal requests or demands to staff at social media companies that content be censored or removed:

Slowly, over time, Twitter staff and executives began to find more and more uses for these tools. Outsiders began petitioning the company to manipulate speech as well: first a little, then more often, then constantly. By 2020, requests from connected actors to delete tweets were routine.[40]

6.53 Both Meta and Twitter were unable to provide country-level data on government requests outside of formal legal requests to remove or moderate content.

6.54 Meta did not directly address the question of whether it removes or restricts content for its users, including Australian users, following a request from a foreign government agency, regulator or official, where the content is not a breach of local law.[41]

6.55 Meta referred to its transparency report when asked whether Meta has ever acted on a request for content moderation on its platforms from the Russian Government, the Chinese Government or the Iranian Government, or any persons connected to those regimes. However, those transparency reports only cite legal requests and not requests made through other informal channels.[42]

6.56 Additionally, Meta's transparency report does not include data on content removed for violating Meta's policies following a request from a foreign government: 'This report details instances where we restricted access to content based on local law. It does not include content that we removed for going against our policies'.[43]

6.57 Twitter was unable to provide detailed information about Australian Government agencies making requests to remove, restrict or limit the visibility of content that were not from a regulator acting to address an alleged breach of law.[44]

6.58 While much of the public reporting on this issue focuses on pressure applied to social media companies by United States (US) authorities, companies acknowledge that similar requests outside of formal legal processes are regularly made or attempted by foreign governments. When asked what processes are in place to ensure that Twitter's staff are not being targeted by foreign governments to action certain requests, Twitter told the committee:

Obviously, there are a number of different ways that foreign governments may target us as a company: through our employees is one and through our infrastructure is another. We have a corporate security team working across all of those threat vectors to keep Kathleen, me and our colleagues safe.[45]

6.59 Social media companies have extensive flexibility within the application of their content rules to determine when content will, or will not, be removed. Meta's Community Standards provide:

In some cases, we allow content – which would otherwise go against our standards – if it's newsworthy and in the public interest. We do this only after weighing the public interest value against the risk of harm, and we look to international human rights standards to make these judgments. In other cases, we may remove content that uses ambiguous or implicit language when additional context allows us to reasonably understand that the content goes against our standards.[46]

6.60 Twitter acknowledged that pressure can be further amplified by overly broad and restrictive laws pressuring social media companies to remove or restrict content under poorly defined claims of 'misinformation' or 'social harm':

Globally, in pretty much every region and market we operate in, there is more regulation being proposed. In many countries, the legal definitions being proposed are so broad as to—I believe—pose a chilling risk to free expression. In some countries, it's phrases like 'harmful content'. Misinformation and disinformation are certainly concerns. We're seeing the pendulum swing towards more restrictive laws that don't necessarily have the same checks and balances or the ability to seek redress to the legal process that you would expect. Also, these decisions are being outsourced from governments to private companies to make the exact determination that you just talked about in terms of who decides whether content is illegal. Is it the state, or is it private companies? One trend we're definitely seeing is very broad laws being used to push that decision onto private companies under threat of significant financial and, in some cases, criminal penalties. All of that will have a chilling effect on free speech.[47]

Fact checking

6.61 The committee explored the issue that identifying misinformation and disinformation relies, to some extent, on common agreement about what information is factual and/or truthful. The role of fact-checking entities is discussed in greater detail in the chapter on civil society.

6.62 The Department of Infrastructure, Transport, Regional Development, Communications and the Arts (Department of Communications) advised that under the proposed changes in the exposure draft of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023, 'the judgement or the determination of disinformation and misinformation on a digital platform is a responsibility that is being quite intentionally left in the hands of the digital platforms to manage themselves'.[48]

6.63 Meta disagreed with that principle, stating that 'US based multinational technology companies should not necessarily be deciding some of these matters'. Meta outlined that it invests in independent fact-checkers, publishes transparency reports and has fact checks published on the fact-checkers' websites, 'so there can be debate around what content has been marked for removal or marked as false so that transparency can be applied across all of industry'.

6.64 Meta advised that the three firms it engages as fact-checkers are Australian Associated Press (AAP), Agence France-Presse (AFP) and RMIT FactLab, and noted that all of them 'are accredited by the International Fact-Checking Network, which is the gold standard organisation of organisations that follow independent expert editorial approaches in the approach that they take to fact-checking'.[49]

6.65 Meta explained that where a fact-checking partner identifies false information, Meta does not remove the content but places a warning label over it so people can click through and see the fact-check statement. Meta noted that 'reasonable minds could disagree on what constitutes misinformation and we do not want to find ourselves in the position where we are making those judgements, so that's an area where we do treat speech a little differently, depending on who the speaker is'.[50]
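
A minimal sketch of the label-rather-than-remove flow Meta describes is set out below. The rating names and response fields are illustrative assumptions rather than Meta's actual schema.

```python
# Illustrative sketch of the label-rather-than-remove flow Meta describes.
# Rating names and response fields are assumptions, not Meta's actual schema.
from enum import Enum

class FactCheckRating(Enum):
    FALSE = "false"
    ALTERED = "altered"
    PARTLY_FALSE = "partly false"
    TRUE = "true"

def apply_fact_check(rating: FactCheckRating) -> dict:
    """Map a fact-checker's rating to platform actions."""
    if rating is FactCheckRating.TRUE:
        return {"remove": False, "warning_overlay": False,
                "link_to_check": False}
    return {
        "remove": False,          # content is not removed ...
        "warning_overlay": True,  # ... but is covered by a click-through label
        "link_to_check": True,    # linking to the fact-checker's statement
    }

print(apply_fact_check(FactCheckRating.FALSE))
```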

Job cuts

6.66 The Department of Communications noted that recent job cuts are an emerging concern for the ability of digital platforms to respond to and counter foreign interference. It further noted that the Australian Government expects digital platforms to continue to fulfil their commitments under the DIGI Code:

Twitter, Meta, Google, TikTok, Snap, Microsoft and Amazon have all announced significant job cuts, with Apple pausing almost all new hiring. This raises questions about the impact of job cuts on Australian based staff, and moderation, safety, policy and government engagement teams globally. Media reporting indicates that these cuts have impacted moderation and safety teams at some platforms, e.g. job cuts at Twitter have affected about 15 percent of its Trust and Safety Group globally and most of its Australian staff working on public policy and safety issues.[51]

6.67 Human Rights Watch submitted that, in the past, Twitter had responded quickly to requests to protect the accounts of Chinese human rights defenders, but stated that, following the recent ownership changes, the reduction in staff numbers 'threatens to change that equation between the platform and China'.[52]

6.68 Twitter responded to these allegations directly to the committee, stating:

Contrary to media reporting, when the restructuring was happening the trust and safety team was affected disproportionately less compared to other teams in the company. You'll have heard our new owner speaking about the importance of content moderation and the importance of preventing manipulation. Indeed, he's spoken about how one of the primary motivators for purchasing the company was preventing manipulation.[53]

Transparency

6.69 Transparency, or rather the lack of it, was raised consistently by many organisations and expert witnesses as an issue of key concern. Transparency issues were raised across many different domains, such as transparency in reporting content removal, recommender algorithms and actions taken against coordinated inauthentic behaviour.

6.70 In relation to transparency through reporting, Meta submitted that it reports regularly on its approach to CIB through quarterly Community Standards Enforcement Reports, Monthly Adversarial Threat Reports, and via the Threat Report—State of Influence Operations 2017–2021.[54]

6.71 Twitter told the committee that its transparency reporting was under review, particularly in relation to disclosures, because 'some of the impacted states were reacting with hostility towards both our company and our employees, so we were also starting to face challenges around how we could do that work safely'.[55]

6.72 The AHRC recommended that one aspect of transparency should be a requirement for platforms to publicly disclose the content that they censor, and that it should be made an offence 'to censor content where that has not been publicly disclosed to users'. The AHRC argued this measure would help to ensure that censorship (including extra-territorial censorship) does not unduly restrict the exercise of free speech in Australia.[56]

6.73 ASPI made a broader recommendation that there should be an 'explicit social contract … mandating social media platforms to disclose state-backed influence operations and other transparency reporting to increase public awareness'.[57]

6.74 Meta and Twitter noted they have transparency measures to report where content is removed or restricted based on a legal request from a government or regulator, but noted they might not always know when a request comes from a government actor. Mr Nick Pickles of Twitter told the committee:

If it was someone in the Twitter app pressing a reporting button as a regular user, we might not know that person is from a foreign government.[58]

6.75 Mr Pickles acknowledged pressure from some governments not to disclose that a content removal request had been made, and noted that strengthened transparency requirements would be a protective factor against such pressure:

I think transparency is a phenomenal deterrent for pressure being applied. If a government knows that their request may be [relayed] to the individual, that makes the people submitting that request consider whether they want to do so.[59]

6.76 Meta pointed to its transparency reports, indicating that users can view content restrictions made as a result of government requests. However, if a government request cites a breach of Meta's content policies, the request is not reported as coming from a government, skewing the overall numbers of removal requests attributed to governments or state actors.[60]

6.77 ASPI noted the lack of transparency from TikTok in relation to removed networks on its app:

When you look at TikTok's disclosures, there is a claim that a network was taken down, but there is no information about what type of content it had, how it was done or any other information to actually verify and check to make sure they're not just silencing voices. For example, they took down a network operating out of Taiwan but didn't disclose what conversations or what topics those accounts were disclosing. That's concerning from an individual perspective, if the platform are taking down content or campaigns without giving a justification, as to whether it is proportional to the campaign itself. It's an important democratic principle to hold powerful platforms to account and do it transparently.[61]

6.78 Mr David Robinson, Director of Internet 2.0, noted that transparency around how companies make decisions is critical for individuals to make informed decisions:

Consumers, in the end, will look at the risk-reward situation of their data and make decisions. But, if they can't make decisions because there's no transparency, then the first step in the process is pretty hard to do.[62]

6.79 Ms Yaqiu Wang of Human Rights Watch argued there is a role for governments to 'force companies to be transparent about what they are censoring, what they are suppressing, what they are promoting' and said this could be done without adversely affecting how people use apps.[63]

6.80 However, Meta noted that in relation to foreign interference 'it may not always be appropriate to have immediate transparency about steps taken to combat foreign interference. There are sometimes very good national security reasons why there may be a delay on that particular type of information, as I suspect security agencies may have advised this committee'.[64]

6.81 To address this, Meta advised the committee it has created partnerships with a handful of security expert organisations, such as ASPI, which are given information that is not publicly available. It noted that:

… we will provide them with more detailed information about our take-downs than is available in the adversarial threat reports so that that can inform a lot of their research reports that they then provide publicly to the intelligence community about trends and patterns.[65]

6.82 The AHRC recommended the Australian Government should 'establish clear and mandatory requirements and pathways for social media organisations to report suspected foreign interference activities'.[66]

6.83 ASPI recommended 'regulation or at least a discussion about how we standardise and create better reporting mechanisms to increase transparency'.[67]

6.84 Meta agreed with ASPI on this, and stated that 'there has not been a benchmarking exercise or a standardisation of the metrics across all the companies that are signatories [to the code]'.[68] Meta pointed to a global organisation called the Digital Trust & Safety Partnership, which is conducting 'a really deep-dive project particularly on this topic, about how you design transparency in ways to actually make it effective'. Meta suggested the outcomes of this project could be 'an important contributing factor to what the ACMA and policymakers might want to have in place as part of legislative reform in Australia'.[69]

6.85 The ACMA similarly noted that:

We think there is still a way to go in terms of those reports. There's still very limited information about how platforms are going about measuring the effectiveness and impact of the measures they're reporting on under the outcomes of the code, and there is a lack of consistency. So, at the moment, we're not in a position to be able to assess, if you like, how measures that one signatory might be doing look against the impact of similar measures a different signatory might be doing.[70]

6.86 However, the ACMA did acknowledge some improvements it had seen in reporting on Australian-specific data relating to misinformation, disinformation and CIB.[71]

Partnerships with civil society

6.87 The committee heard that partnerships between platforms and civil society are an important risk amelioration tool. These could be ongoing partnerships, such as fact-checking services, or could simply involve platforms providing the data necessary for expert organisations to undertake their research on foreign interference issues. However, while some platforms have such partnerships, there were ongoing concerns about how effective they are, particularly in relation to data sharing.

6.88 Meta outlined that it is a major sponsor of ASPI, which is among the small group of researchers with which it shares information about CIB takedowns. Meta also funded an ASPI review of information-for-hire, focused on the Asia-Pacific region.[72]

6.89 Dr Andrew Dowse, Director of RAND Australia, noted how difficult it is for researchers to identify incidents of foreign interference: without access to social media networks' internal identification data, researchers have to attempt to identify patterns in data. He also noted the need for greater funding:

There are limited funding sources being provided in these areas. So in general we in RAND have been utilising US studies and US funding to be able to do it. I would dearly like to see the Australian government perhaps providing more sources of funding so we can do this research. At the moment all the research we've done in this area in Australia from a RAND perspective has been internally funded.[73]

6.90 Professor Rory Cormac from the University of Nottingham noted the need for greater research to understand the scope of the problem, noting that states tend to count only in metrics—how many people were reached, or how many inauthentic newspaper articles were placed. Professor Cormac advised that this 'gives a misleading idea of the impact, because it doesn't tell you what the scope and the scale were'. Professor Cormac went on to detail some of the analysis that is required:

Sometimes [states] work through legitimate actors. Sometimes they work through what were once called unwitting idiots. Sometimes they work through stooges. Sometimes they work through local influencers. So there's a whole spectrum, and we need to map that spectrum, that relationship between the state and the non-state actor or the conduit, to get a good sense of how direct or indirect this data is in order for us to counter that.[74]

6.91 Professor Cormac later expanded on this and noted the need for greater analysis of the impacts of CIB:

It's not enough for us as governments to know how many people clicked on that. We then need to know: what did they do next? Did they click on Russian propaganda or terrorist propaganda or Chinese propaganda? Did they then get radicalised? Did they then click on something else? What was their journey? Being able to master that—that's big data and that's tough. That kind of behavioural change as a consequence needs to be measured.[75]

6.92 Dr Stoltz advised that this approach would best be done in collaboration with classified intelligence collection to understand the intent of adversaries, so that 'we are informed through potentially classified information about what the core strategic intent of the adversary is, it means that our response can be managed in a more proportionate way and we don't get stuck in a loop of misunderstanding and reacting in inappropriate ways that causes a kind of escalation cycle'.[76]

6.93 Twitter flagged that it was open to an approach for providing data to academics where researchers could be 'accredited by governments, in some cases, or public institutions, as a form of vetting to actually protect the data itself from bad actors'.[77] Meta submitted that its ThreatExchange application programming interface (API) facilitates industry efforts to combat cyberthreats through threat signal sharing between security professionals.[78]
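
Meta's submission identifies ThreatExchange only by name. The sketch below shows what consuming such a threat-sharing API might look like; the endpoint path and parameter names follow ThreatExchange's public Graph API documentation, but the exact version, response fields and credentials here should be treated as assumptions.

```python
# Illustrative only: consuming a threat-sharing API such as Meta's
# ThreatExchange. The endpoint path and parameters follow ThreatExchange's
# public Graph API documentation, but the version, response fields and
# credentials shown here are assumptions.
import requests

GRAPH = "https://graph.facebook.com/v17.0"
ACCESS_TOKEN = "APP_ID|APP_SECRET"  # placeholder; real apps use vetted creds

def fetch_threat_descriptors(query: str) -> list[dict]:
    """Pull shared threat indicators (domains, URLs, hashes) matching a term."""
    resp = requests.get(
        f"{GRAPH}/threat_descriptors",
        params={"access_token": ACCESS_TOKEN, "text": query, "limit": 25},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for desc in fetch_threat_descriptors("coordinated inauthentic behavior"):
    print(desc.get("id"), desc.get("status"), desc.get("description"))
```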

6.94 Professor Cormac noted that, to improve access to necessary data for analytical research, there had to be an international response. Governments could work together to persuade firms to share the data, either through the United Nations General Assembly, or via smaller groupings such as the Quadrilateral Security Dialogue (the Quad), the Association of Southeast Asian Nations (ASEAN) 'or any other combination of multilateral groupings, in order to try and create cross-jurisdictional norms and standards that we're satisfied with, and then we can generate that scale to bring these companies into an alignment with an obligatory regime rather than one that is solely voluntary'.[79]

Unverified accounts

6.95 Anonymity was also discussed as a concern. Anonymity and identity shielding have benefits in that they can protect users' privacy, but they can also be weaponised to enable foreign interference. Some platforms' verification processes, or lack thereof, allow fake, imposter or multiple accounts to be created and operated by one user, which can then be used to conduct online foreign interference activities in an unmoderated way.[80]

6.96 Dr Andrew Dowse noted that allowing unverified accounts exacerbates the dangers of foreign interference via social media.[81]

Platforms and authoritarian states

6.97 Many submitters and witnesses stressed that platforms which originate from authoritarian states pose additional security concerns that are not applicable to platforms that operate under free market principles within a liberal democracy. In particular, discussions centred on TikTok and WeChat.[82]

6.98 CyberCX warned the committee that the collection and aggregation of the personal information of millions of Australians, and the capacity for malign foreign actors to manipulate content consumed by Australians at scale, are heightened risks on platforms that are linked to authoritarian governments, including TikTok and WeChat. As CyberCX pointed out, Chinese corporations are required by Chinese law to cooperate with Chinese government authorities, which operate 'without the oversight and transparency mechanisms of rule-of-law democracies'.[83]

6.99 ASIO highlighted the additional security concern that authoritarian governments 'are able to direct their country's institutions—including media, businesses and society—to support intelligence or foreign policy objectives in ways which are not acceptable in a democracy'.[84]

6.100 Internet 2.0 likewise argued that the 'free market principles' under which we allow companies to operate in a liberal democracy should not extend to companies based in authoritarian states:

They should not be allowed to gain profit and capital at the expense of our national security. Ultimately, they do not reside in our society. Social media companies that reside in our society will contribute and attempt to uphold democratic values. These companies should be defended against social media companies that align with our authoritarian competitors, because they flourish only as a result of our free-market principles and democratic system.[85]

6.101 Ms Shanthi Kalathil, a former advisor on the US National Security Council, told the committee that these platforms are increasingly the propaganda tool of choice of authoritarian states:

… network cross-border influence operations by authoritarian actors have grown in sophistication and effectiveness in recent years, shaping narratives and targeting democratic institutions during important geopolitical moments. These include but are certainly not limited to election periods. While not disavowing more traditional forms of propaganda, authoritarian regimes are increasingly using digital influence operations as a method of censorship and manipulation, flooding the information space with false or misleading narratives designed to crowd out independent voices and expertise.[86]

6.102 Ms Kalathil further advised the committee that 'we should not simply examine authoritarian links to social media platforms; we should put them in the context of the history and political systems from which they emerged, particularly with respect to the legacy of how those systems understand and manage information', and noted that the implications of data collection by those authoritarian states are concerning, particularly in light of their security agencies attempting to harness big data for predictive policing.[87]

6.103 ASPI used the above approach and put TikTok and ByteDance into their political context by reminding the committee of the obedience to the CCP forced upon ByteDance's founder:

For TikTok's parent company ByteDance, this authoritarian approach has included compelling the company's founder Zhang Yiming to make an abject apology in a public letter for failing to respect the Chinese Communist Party's 'socialist core values' and for 'deviating from public opinion guidance'—one of the CCP's terms for censorship and propaganda.[88]

6.104 ASPI advised the committee that this poses a regulatory challenge for government, as platforms originating from authoritarian states do not face the same oversight in their home nations:

Policymakers and legislators should ensure that when addressing the social media challenges (and broader technology risks) they do not disproportionately regulate Western technology platforms while ignoring technology platforms that originate from China or Russia. Democracies must not exacerbate the already unlevel playing field when our ultimate objective is to be able to compete against the authoritarian regimes which are abusing technology to further their strategic interests.[89]

6.105 In relation to WeChat, Dr Seth Kaplan, a US political expert, argued that the platform is 'basically a narrative machine for the CCP and what it wants to promote'.[90] WeChat remains a wholly-owned subsidiary of Tencent, with its headquarters in Shenzhen, China.[91]

6.106 Much has been made of the importance of the WeChat platform to Chinese-heritage Australians:

WeChat is essential to communication between Chinese Australians and their families, friends, and business partners in China. This is largely due to the fact that social media platforms such as WhatsApp and Facebook are not allowed in China. WeChat is a necessity, not a choice for many Chinese Australians.[92]

6.107 Professor Wanning Sun from the Australia-China Relations Institute (ACRI) at the University of Technology Sydney and Professor Haiqing Yu, Professor of Media and Communications at RMIT, argued against the notion that the WeChat platform is heavily censored in favour of the CCP, submitting 'WeChat, like Facebook and Twitter, is a social media platform that carries wide-ranging and diversely sourced content; its ideological landscape is fragmented and contested'.[93] ACRI's founding donor and chairman, Huang Xiangmo, was later banned from Australia on national security grounds.[94]

6.108 The artist Badiucao disagreed and noted that, because of the way WeChat operates in Australia, it enables the Chinese government 'to export its censorship outside of China regardless of local law or rights protection in Australian society'.[95]

6.109 The Australian Electoral Commission (AEC) noted that although it monitors social media platforms as part of its work on election integrity, it does not monitor WeChat in the same way, because much of the conversation on WeChat occurs in closed channels. The AEC noted that on other platforms it also only monitors comments made in open channels, and does not monitor individual conversations.[96]

6.110 In relation to TikTok, warnings regarding the security of the platform have come from a variety of sources over many years. Most recently, in May 2023, the former head of engineering for ByteDance's US operations, Yintao Yu, alleged TikTok served as a propaganda tool for the Chinese government by suppressing or promoting content favourable to the country's interests. Mr Yu alleged the Chinese government monitored ByteDance's work from within its Beijing headquarters, provided guidance on advancing 'core communist values' and maintained access to all company data, including information stored in the US.[97]

6.111 In response to concerns around the platform, on 4 April 2023 the Australian Attorney-General, the Hon Mark Dreyfus KC MP, banned TikTok from being installed on government devices, except in limited circumstances. The ban has been implemented at a government level in other Western countries including the US, the United Kingdom (UK), New Zealand, Canada and France.[98]

6.112 The main concerns are China's national security law coupled with the data TikTok collects on its users. The 2017 law requires organisations and citizens to 'support, assist and cooperate with the state intelligence work', and experts say that, while there is no definitive proof that China's government has used this law against TikTok, the type of data TikTok collects makes this a high risk.[99]

6.113 The following sections outline some of the specific concerns with platforms such as TikTok and WeChat.

CCP national security laws

6.114 A key concern raised regarding platforms that originate from authoritarian states such as China was that the data those apps collect from millions of Australians can be misused to create targeted foreign interference campaigns.[100] Many submitters and witnesses noted that Chinese national security and data laws mean that private companies are required by Chinese law to assist national security agencies with any request they may make and, furthermore, that those requests must be kept secret. Ms Shanthi Kalathil stated:

… I believe the structure of the party state in the People's Republic of China is such that, were the party to make demands on private companies, there's no legal or technical way for those companies to resist those demands.

I would also say those companies in particular that you mentioned [Tencent and ByteDance] have already been named as particular champions by the state. They [have] been designated as having a special relationship with the state, so they are in a bit of a different category; they are not like some of the smaller ones. Their entwinement and entanglement not just with the state but with the military and with overarching state priorities puts those companies in a special category.[101]

6.115 Ms Kalathil further noted that large companies such as ByteDance and Tencent are required to have CCP cells operating within them.[102]

6.116 ASPI noted of TikTok that:

The enormous leverage that the CCP has over the company is what drove the company at the time to boost its army of censors by an extra 4,000 people (candidates with party loyalty were preferred) and it's what continues to motivate ByteDance to conduct 'party-building' exercises inside the company.[103]

6.117 Internet 2.0 noted further dangers that platforms such as TikTok and WeChat pose:

We assess that social media companies which reside in our authoritarian competitors' orbit will not act in our democratic interests; if we attempt to compel them to remove disinformation, they can resist this as they balance the competing interests of the authoritarian regime. They can resist our sovereign interventions by hosting data, media, and their platforms outside of our jurisdiction.[104]

6.118 When asked about these Chinese national security laws and the safety of TikTok data from being shared with the CCP, the Australian Signals Directorate advised the committee that it had 'not seen technical controls that would provide a basis for technical advice that the risks of sharing data are adequately mitigated'.[105]

6.119 Furthermore, ASIO advised that 'like any individual in a corporation or in government, if they [employees] have access to sensitive information or if they have the ability to propagate foreign interference, there will always be the potential that they'll be targeted'.[106]

6.120 Ms Yaqiu Wang of Human Rights Watch also expressed doubt that companies like Tencent and ByteDance would refuse to share data when asked by the CCP, because 'their livelihood is hinged to the Chinese Communist Party'.[107]

6.121 Ms Lindsay Gorman of the Alliance for Securing Democracy advised the committee that:

Our research has found that, despite the professed separation between the CCP and TikTok, PRC diplomats and state media accounts have gone to bat for TikTok, with messaging campaigns designed to paint the app in a benign light, hype up TikTok's popularity and the consequences of the ban, denigrate the US political system as hostile to business, and portray criticism of TikTok or China as xenophobic, often drawing on familiar tropes.[108]

6.122 Ms Ella Woods-Joyce, Acting Director of Public Policy, Australia and New Zealand, TikTok, argued that TikTok Australia has 'never been asked for that user data and we would not provide it if we were asked'. Ms Woods-Joyce further argued that data is not stored in China, but conceded that this did not mean data was not retrievable from China, as China-based TikTok employees have access to data stored outside of China.[109]

6.123 The claim that TikTok would not provide data was not seen as credible by many inquiry participants.[110] Mr Brendan Carr, one of three United States Federal Communications Commissioners, noted that:

The argument that somehow TikTok are going to stand up to the CCP is belied by their inability to do it at any point in time publicly. For instance, they've been asked in US media interviews whether they acknowledge the existence of the Uighur genocide that's taking place in Xinjiang. Their official on TV refused to address it. That same question was posed to their CEO when he testified here recently, and once again he declined to address it. If a TikTok official is not willing to say that the CCP is committing genocide with respect to Uighurs then that gives very little comfort that they would turn them down and say no to them when they do come to them for access. And it's almost separate from access; they're so intermingled here right now in terms of the data that is accessible from inside China, it's not clear that a formal ask would even be necessary in these circumstances.[111]

6.124 TikTok Australia did concede that it has engineers located within China who work on the TikTok app and, further, that those workers are not under the direction or control of Mr Lee Hunter, the General Manager of Operations for TikTok Australia. It was later acknowledged that staff located in mainland China, where the Chinese national security and data laws operate, do indeed have access to Australians' data harvested by the TikTok app, and can also make changes to the operation of the algorithm which influences what users see.[112]

6.125Mr Hunter, despite having made assurances that Australian TikTok users' data is safe, also acknowledged that he is just 'one of several spokespeople for TikTok in Australia' and that he relies on the expertise of others to ensure his statements are factually accurate.[113]

Use of TikTok data for CCP purposes

6.126Numerous reports exist that data collected by TikTok has made it into the hands of the CCP and has been put to use for political harassment and interference. For example, the AHRC submitted that 'ByteDance had used TikTok to track the physical location of multiple Forbes journalists who were reporting on the company as part of a covert surveillance campaign', and cited an 'investigation by BuzzFeed News that concluded China-based TikTok employees had access to US user data, and repeatedly accessed that data'.[114]

6.127Mr Hunter discussed the allegations of surveillance when they first arose, writing in an opinion piece in the Daily Telegraph newspaper on 24 October 2022 that:

Unfortunately, some of the public criticism TikTok has faced recently has not been based on fact but exaggeration, speculation and sometimes pure fabrication.

Take for example a recent report that suggested 'TikTok parent ByteDance planned to use TikTok to monitor the physical location of specific American citizens'.

The fact is … TikTok has never been used to 'target' any members of the US government, activists, public figures or journalists, nor do we serve them a different content experience than other users.

TikTok does not collect precise GPS [Global Positioning System] location information from US (or Australian) users, meaning TikTok could not monitor US users in the way the article suggested.[115]

6.128Later, when discussing the case with the committee, Mr Hunter said he disputed 'the characterisation that we spied on journalists' and claimed it was the actions of 'rogue employees [who] are no longer with the business'. Mr Hunter informed the committee that strengthened policies meant this could not happen again; however, he did not acknowledge that in October 2022 he had declared the breach was not technically possible, and that 'TikTok could not monitor US users in the way the article suggested'.

6.129Mr Hunter did not respond to questions from the committee as to why he should be believed this time—given he had previously declared the breach could not occur and was proven wrong—and as to why he did not seek to correct the record. Mr Hunter's evidence to the committee lacked credibility and candour.[116]

6.130Ms Woods-Joyce of TikTok Australia also discussed the current court case in which former employees disclosed that TikTok used its data to spy on users in Hong Kong, stating that 'we will be contesting the range of allegations that has been made in that matter'.[117]

6.131US Federal Communications Commissioner Brendan Carr told the committee that TikTok 'presents an exceedingly unique threat to national security' for two reasons: surveillance and foreign influence. Commissioner Carr noted that, in relation to surveillance, 'TikTok operates as a very sophisticated surveillance technology' and that, while TikTok claims it collects very little data:

… that's not what the evidence indicates. TikTok collects everything from search and browsing history to keystroke patterns, and its terms of service reserve the right to get biometrics, including face prints and voice prints. For years, TikTok told regulators not to worry and that the data was not stored inside China. In some places it said that the data didn't even exist inside China.

Flash forward to this past summer, and a BuzzFeed News report got internal TikTok communications that showed that in fact everything 'is seen inside China'. We've seen additional evidence pile on from there. For instance, TikTok denied a report that it—a China based team—was using the app to surveil the location of specific Americans. It said that that report lacked journalistic integrity. It turns out that that report was entirely accurate, and TikTok has now admitted that it used the app to surveil the locations of specific Americans—in fact, reporters that have been writing negative stories about TikTok and other reporters located outside the United States as well. Now the US Department of Justice and FBI [US Federal Bureau of Investigation] are looking into that exact type of conduct.[118]

6.132ASPI expressed similar concerns, citing data security and content manipulation, further noting that the leverage the CCP has over TikTok 'exacerbates the former two risks and is unique to TikTok as a major mainstream social media app'.[119]

6.133Internet 2.0 has conducted an in-depth analysis of the TikTok application and the data it collects, and found that:

We've conducted code analysis on their location data twice, first last year and then secondly about two months ago. Their code basically accesses coarse and fine location and has a specific 'get long get lat' string that in essence basically pulls the longitude and latitude from the GPS into the app. TikTok queries that location data specifically. The accuracy is debatable, but that's a technical discussion about whether it's five metres accurate or 20 metres accurate. The fact that they actually pull the location is still there. It's another example of the lack of transparency and confusing response from TikTok, in my opinion. We just printed the code location requests and then they denied it. We said, 'This is a picture.' Anyway, they updated their code in the last six months. Previously they could get basically a horizontal and vertical location on the map. But they have updated their location data when we last analysed it, so now they can get bearing, speed and altitude. So if you're in a high-rise building, they can tell what floor you're on now. That wasn't previously in their code, last year.[120]
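To illustrate the class of behaviour Internet 2.0 describes, the following is a minimal sketch in Kotlin, using the standard Android LocationManager API, of how any app granted location permission can read fine-grained position together with bearing, speed and altitude. It is illustrative only and is not TikTok's code; the class name LocationProbe is invented, while the API calls are the Android platform's public interface.

```kotlin
import android.Manifest
import android.content.Context
import android.location.Location
import android.location.LocationListener
import android.location.LocationManager
import androidx.annotation.RequiresPermission

// Illustrative sketch only; not TikTok's code. Shows how an Android app
// with location permission can obtain the fields Internet 2.0 describes:
// longitude and latitude plus bearing, speed and altitude.
class LocationProbe(context: Context) : LocationListener {

    private val locationManager =
        context.getSystemService(Context.LOCATION_SERVICE) as LocationManager

    @RequiresPermission(Manifest.permission.ACCESS_FINE_LOCATION)
    fun start() {
        // GPS_PROVIDER yields "fine" fixes; NETWORK_PROVIDER yields the
        // "coarse" fixes Internet 2.0 also refers to.
        locationManager.requestLocationUpdates(
            LocationManager.GPS_PROVIDER,
            1_000L, // minimum time between updates, milliseconds
            0f,     // minimum distance between updates, metres
            this
        )
    }

    override fun onLocationChanged(location: Location) {
        val longitude = location.longitude // the 'get long / get lat' pair
        val latitude = location.latitude
        val bearing = location.bearing     // direction of travel, degrees
        val speed = location.speed         // metres per second
        val altitude = location.altitude   // metres; can distinguish floors
        // An app could transmit these fields to its servers from here.
    }
}
```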

6.134Mr David Robinson of Internet 2.0 advised that 'we saw no reason, in order to create an effective video platform, to have all that data' and compared the practice to that of YouTube and Instagram, which are less invasive.[121] Mr Robinson noted that:

On an even basis, with a transparency scoring system, TikTok is the worst app that we have published in social media.[122]

6.135In response, Ms Woods-Joyce of TikTok said that the report contains 'a range of information that is not necessarily accurate', but did not specify what she claimed was incorrect.[123]

Manipulation of content algorithms

6.136Another concern raised by inquiry participants was the use of TikTok by the CCP to further its propaganda, including through the manipulation of content algorithms or 'heating' buttons.

6.137In relation to foreign influence, Commissioner Carr noted that:

… the CCP's propaganda arm set up TikTok accounts and, unlike other social media, did not disclose the state media affiliation of those accounts. Those accounts targeted US politicians for criticism ahead of our most recent mid-term elections—accounts that saw millions and millions of views on those videos.[124]

6.138Ms Lindsay Gorman of the Alliance for Securing Democracy noted that 'TikTok's heating button, for example, reportedly allows employees to manually make specific content go viral' and further noted that if the CCP were to direct TikTok to heat certain content, 'it would be extremely difficult to discern this manipulation'.[125]
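The difficulty Ms Gorman describes can be made concrete with a short, purely hypothetical sketch in Kotlin; it does not describe TikTok's actual systems, and the names Video, organicScore and heated are invented. It shows how a manual 'heating' flag could be folded into a feed-ranking score, leaving no outward trace that distinguishes heated content from organically viral content.

```kotlin
// Purely hypothetical sketch; not TikTok's code. All names are invented.
data class Video(
    val id: String,
    val organicScore: Double,   // derived from likes, watch time, shares
    val heated: Boolean = false // set manually by an employee
)

fun rankForFeed(candidates: List<Video>): List<Video> =
    candidates.sortedByDescending { video ->
        // A heated video is lifted into the range of genuinely viral
        // content; the resulting feed carries no trace of the override,
        // which is why external observers cannot discern the manipulation.
        if (video.heated) video.organicScore + 1_000.0 else video.organicScore
    }
```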

6.139ASPI submitted that leaked documents have revealed that TikTok instructed moderators to censor videos that 'mention Tiananmen Square, Tibetan independence, or the banned religious group Falun Gong'. TikTok conceded this occurred but insisted 'those documents don't reflect its current policy and that it had since embraced a localised content moderation strategy tailored to each region'.[126]

6.140ASPI further noted that:

In 2022, TikTok blocked an estimated 95% of content previously available to Russians, according to Tracking Exposed, a nonprofit organization in Europe that analyses algorithms on social media. In addition to this mass restriction of content, the organisation also uncovered a network of coordinated accounts that were using a loophole to post pro-war propaganda in Russia on the platform. In other words, at the outset of Putin's invasion of Ukraine, TikTok was effectively turned into a 24/7 propaganda channel for the Kremlin.[127]

6.141ASPI advised the committee that TikTok has the ability to detect political speech, because it 'monitors key words in posts for content related to elections so that it can then attach links to its in-app election centre'. ASPI cited experiments conducted by non-profit group Accelerate Change, which found that using certain political words in TikTok videos decreased their distribution by 66 per cent. ASPI further advised that in 2020, US TikTok executives had noticed that views for certain political content creators were dropping 30 to 40 per cent. The executives later 'found out that a team in China had made changes to the algorithm to play down political conversations about the election'.[128]
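As a minimal sketch of the mechanism ASPI describes, in Kotlin and with invented names (it is not TikTok's code), the same keyword match that lets a platform attach an election-centre link can equally feed a distribution decision, with no outward sign beyond lower view counts.

```kotlin
// Minimal sketch; not TikTok's code. The keyword list, the function names
// and the 66 per cent reduction (taken from the Accelerate Change
// experiments cited above) are illustrative assumptions.
val electionKeywords = setOf("election", "vote", "ballot", "candidate")

fun containsPoliticalSpeech(caption: String): Boolean =
    caption.lowercase().split(Regex("\\W+")).any { it in electionKeywords }

fun adjustedReach(baseReach: Int, caption: String): Int =
    if (containsPoliticalSpeech(caption)) {
        // The same flag could also trigger an election-centre label here.
        (baseReach * 0.34).toInt() // roughly the 66 per cent cut observed
    } else {
        baseReach
    }
```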

6.142Ms Woods-Joyce of TikTok told the committee that 'TikTok is not moderated based on political sensitivities'. When asked whether TikTok had done so in the past, she conceded that TikTok had 'not always made the correct moderation decisions', but did not specify what changes had been made to ensure that did not occur again. This attempted evasion of straightforward questions, which was a consistent feature of Ms Woods-Joyce's evidence before the committee, reflected poorly on her as a witness.[129]

6.143However, ASPI highlighted that manipulation of content is not limited to TikTok. ASPI noted an example from February 2023, in which 'Twitter chief executive Elon Musk rallied a team of roughly 80 engineers to reconfigure the platform's algorithm so his tweets would be more widely viewed'. ASPI advised that there is 'clearly a need for all social media companies to be more transparent about how changes to their algorithms affect the content users receive'.[130]

Evading oversight

6.144Commissioner Carr also raised concerns with the manner in which TikTok engages with regulation and oversight:

When a senior TikTok official—I believe the COO [Chief Operating Officer]—was testifying here in our Senate a few months ago they were asked point-blank, 'Do you share sensitive US user data with personnel—TikTok or ByteDance—in Beijing that are themselves members of the CCP?' That official declined to answer that question. I think a lot of these concerns could have been put to rest if they had answered pointedly and either said no or, frankly, perhaps said, 'Yes, and here are the protections put in place.' The evasive nature of these answers over years is deeply concerning.[131]

6.145Mr David Robinson of Internet 2.0 raised similar concerns. His organisation had highlighted a number of issues with TikTok's data collection, and he noted of TikTok's response that:

TikTok have never actually been transparent. TikTok have never said, 'Internet 2.0 is wrong because of this piece of code and it actually does this and this,' and printed lines of their code in a transparent way. They've always just called us names and spent a lot of money on anti campaigns. From my perspective, every other social media company that we engage with has come back and qualified their position. Companies like Proton and Telegram, when asked, qualified their position and explained exactly what their things do. TikTok have never done this with evidence.[132]

6.146Mr Robinson compared the TikTok response to Proton, saying that Proton 'just sent us the GitHub, and we analysed that and it came out exactly the same because it's a transparent way to do things' and further noted in relation to TikTok's response that 'when people deny stuff, normally you've found their sensitive spot'.[133]

6.147This evasive nature was evident in the manner in which TikTok's Australian executives responded to questions posed by the committee at a public hearing on 11 July 2023. When asked a simple question, Ms Woods-Joyce avoided agreeing with the premise that ByteDance is a Chinese company, despite TikTok's submission stating it is proud of 'our company's Chinese heritage'. Further, Ms Woods-Joyce avoided answering whether ByteDance is headquartered in China, stating that 'ByteDance has operations around the world', later conceding the existence of 'offices in China'.[134]

6.148These answers are consistent with news reports that leaked 'internal documents show TikTok is actively instructing employees to "downplay the China association" to deflect growing media scrutiny over the video-sharing app's Beijing-based owner'.[135]

6.149This is further highlighted by the March 2023 resignation of a senior ByteDance executive from the Australian board of TikTok, amid the viral video platform's efforts to distance itself from its Chinese parent company.[136]

Data safety

6.150The issue of data collection by the CCP has been one of increasing concern to the global intelligence community. Mr Richard Moore, head of the UK's intelligence agency MI6, recently warned that:

China has added to its immense datasets at home by hoovering up others abroad, and the Chinese authorities are not hugely troubled by questions of personal privacy, or individual data security. They're focused on controlling information and preventing inconvenient truths from being revealed.[137]

6.151Ms Yaqiu Wang of Human Rights Watch noted that all social media platforms gather data, but while most do so for the purposes of 'selling you more things', China-based companies were subject to the control of the CCP, which 'has political goals beyond profits'.[138]

6.152Internet 2.0 argued for actions to keep data safe from authoritarian regimes:

As social media companies are the primary producer of high quality social, psychological, political, military, and economic data, liberal democracies must move to defend this data by limiting the access to it by authoritarian regimes.[139]

6.153The approach of data localisation, where user data is housed in databanks outside of mainland China, is currently being pursued by TikTok in the European Union (EU; Project Clover) and the US (Project Texas). Details were outlined in Chapter 3. Experts advised this committee that a similar approach could be taken here, but qualified that it would not be a 'silver bullet' solution.

6.154Mr David Robinson of Internet 2.0 noted that while a single mechanism would not work, he believed that 'some form of local data storage is required' because 'we lose the ability to control how that data is regulated once it leaves our shores'. Mr Robinson further explained the difficulties in regulating offshore data:

… the Australian Privacy Act is limited to Australian waters, basically. As soon as that data leaves Australia, it's up to you to have an alliance or an agreement with the other country that your law still applies. If you sell the data to China and it's in China, Chinese law applies.[140]

6.155Mr Robinson noted the difficulties this presents during an election: if data is sitting in a country that does not cooperate with Australia, the ability to have offending content taken down quickly is reduced, and false statements have longer to have an impact. If data is sitting in Australia or the US, it can likely be taken down within hours.[141]

6.156Mr Robinson further advised that Australia should look to data collection laws in the EU:

… the GDPR [General Data Protection Regulation] is probably a better system of regulating how data is used. The ability of Western social media companies to collect and sell users' data in Europe is far less than in Australia and the US—so I can't buy sophisticated advertising data easily in European jurisdictions as I can in Australia and the US, because of the way the act works.[142]

6.157Ms Shanthi Kalathil noted that data collection is not only occurring via social media platforms. She advised that Chinese participation in smart city platforms in a number of countries also provided an opportunity for data collection, and that the gaming sector in particular is a place where significant amounts of data can be collected, even though that sector 'frequently does not get included in the kinds of conversations that we're having here today'.[143]

6.158Ms Kalathil agreed that data localisation may help to mitigate the risk, but noted that 'you can't use primarily a technical or legal solution to what ultimately is a political problem'.[144]

6.159Commissioner Carr concurred that data localisation [Project Texas in the US] would not provide sufficient protections, and cited a relevant news article:

The Buzzfeed news article got leaked internal audio from TikTok DC [District of Columbia] officials saying that, even after Project Texas was put in place, 'It remains to be seen if product and engineering'—meaning Beijing—'would continue to have access to the data.' If TikTok officials themselves don't believe that Project Texas is actually going to safeguard data against access from inside China, I think we should take their word for it.[145]

6.160Commissioner Carr recommended the best approach would be a ban or genuine divestiture, for example one in which ByteDance would be forced to sell TikTok to a non-Chinese company based in the mainland US.[146]

6.161ASPI also expressed doubt at the usefulness of data localisation:

TikTok will always say that TikTok user data is stored in Singapore or in the United States. That is really immaterial if that data continues to be accessed from Beijing.[147]

Impact on Chinese-language media

6.162A key concern of experts was the chilling effect that WeChat can have on independent Chinese-language media. Dr Kaplan noted that these independent media outlets, 'because of advertising and WeChat and possibly because they want to do business in China, are self-censoring completely'.[148]

6.163Human Rights Watch raised similar concerns that local Chinese-language media self-censors so their content will be allowed on WeChat:

Then, as a local news website operator, because your account is on WeChat, you have to think about what can be published on WeChat. So you self-censor … A diaspora outlet that caters to the diaspora has to go through Beijing's censors. Even what's going on in Australia is censored by Beijing. That's the news side.[149]

6.164Dr Kaplan informed the committee that it is not just federal issues of concern, but that in the US 'the state level may be more vulnerable than the federal level'. Dr Kaplan noted that 'WeChat is managing information, helping to mobilise Chinese speakers and then seeking coalition partners among the non-Chinese-language-speaking civil society, and all this, basically, is direct interference in the politics of the country'.[150]

TikTok bans

6.165On 17 March 2023, New Zealand's Parliamentary Service announced it was blocking TikTok from all parliamentary devices. This followed a ban in November 2022 preventing members of New Zealand's Defence Force from having the app on government-issued phones, joining the Ministry of Education, Ministry of Foreign Affairs and Trade, Department of the Prime Minister and Cabinet, Corrections, Police, Treasury, Ministry of Justice, and Ministry for Primary Industries.[151]

6.166Canada's Federal Government banned TikTok on government-issued devices in February 2023.[152]

6.167When questioned on the decisions to ban TikTok and security concerns raised by the governments of the US, Canada, France, the UK, the European Commission, New Zealand and Denmark, Ms Woods-Joyce of TikTok said 'we don't believe there's any evidence to support the view that TikTok is in any way a national security threat, as might be implied by some of the commentary about us'.[153]

6.168However, when the same question was immediately posed to Mr Will Farrell, a Security Officer of TikTok US Data Security Inc, he gave the opposite response, saying 'we've never pushed back on the concerns', then clarifying that this meant 'we choose not to argue about the concerns'.[154] Mr Farrell was portrayed to the committee as a technical expert who could answer questions about the operation of the app, but was frequently unable to do so. TikTok's decision to make available to the committee witnesses who were unable to assist with obvious questions indicates it did not intend to engage genuinely with the committee process.

6.169The Australian Signals Directorate provided an extensive outline to the committee on the nature of the advice it gave the Australian Government; advice which was a considerable factor in the decision to ban the app from government-issued devices:

ASD's [Australian Signals Directorate's] technical advice centres on its analysis of the business model which is employed by TikTok which, like other social media platforms, operates on the basis of collecting user information. That information and data collection includes data, user generated content and device data. The application can also identify other applications in use on the device and any browsing activity undertaken within the TikTok web browser. The other data may include users' names, email addresses, phone numbers, contacts, photos, user IDs [identifications], ad [advertising] identifiers and data stored in the device clipboard while TikTok is in use. Device data may include SIM card and IMSI [international mobile subscriber identity] numbers, device type and model, language, time zone, unique device IDs, running of installed apps, IP [internet protocol] addresses, telecommunication provider details and details of the devices on the same network.

Finally—and not so much technical but associated with the technical risks—is the risk of foreign ownership. I heard the committee comment before on the operation of the Chinese national intelligence law which requires that citizens, companies and organisations cooperate, and there is capacity to ask for that cooperation not to be disclosed.[155]
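Several of the data types ASD lists are available to any installed app through ordinary platform APIs. The following minimal Kotlin sketch is illustrative only; it is not TikTok's code, and the function name is invented. It shows how device model, language, time zone and clipboard contents can be read.

```kotlin
import android.content.ClipboardManager
import android.content.Context
import android.os.Build
import java.util.Locale
import java.util.TimeZone

// Illustrative sketch; not TikTok's code. Shows how standard platform APIs
// expose several of the fields ASD lists: device model, language, time zone
// and the contents of the clipboard while an app is in the foreground.
fun collectDeviceSnapshot(context: Context): Map<String, String?> {
    val clipboard =
        context.getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
    return mapOf(
        "deviceModel" to "${Build.MANUFACTURER} ${Build.MODEL}",
        "language" to Locale.getDefault().toLanguageTag(),
        "timeZone" to TimeZone.getDefault().id,
        // Clipboard text is readable while the app has focus (Android 10+
        // restricts background access).
        "clipboard" to clipboard.primaryClip?.getItemAt(0)?.text?.toString()
    )
}
```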

6.170Several Australian Government departments have also implemented restrictions on WeChat, including the Department of the Prime Minister and Cabinet, the Department of Foreign Affairs and Trade, and Home Affairs.[156]

Bans or divestment

6.171There was disagreement among expert witnesses as to whether the best security mitigation for platforms that originate from authoritarian states was usage bans, standards-based regulation or divestment of platforms to companies that would be wholly subject to local laws.

6.172Dr Seth Kaplan argued against an outright ban on WeChat because it would be too difficult to implement. Dr Kaplan instead pointed to setting standards via regulation, but noted that it would be difficult for WeChat to meet those standards 'given the way China operates and the way Tencent is related to the government'. Dr Kaplan further noted that WeChat could not be divested in the same way that TikTok could be.[157]

6.173ASPI did not go so far as to call for a ban of such platforms and applications, but recommended 'bespoke legislation that deals with TikTok and any other emerging major social media apps from authoritarian countries'.[158]

6.174Mr David Robinson of Internet 2.0 argued against banning specific apps because companies could quickly use the source code to make a new app and argued that applying 'legislation and regulation to the software and the code and how it acts is more important than the specific company'.[159]

6.175Ms Yaqiu Wang of Human Rights Watch also argued in favour of standards-setting legislation, which included shutdown clauses for non-compliance.[160]

6.176Ms Lindsay Gorman of the Alliance for Securing Democracy argued for forced divestiture of TikTok, given its large user base and incorporation into the wider information environment:

That makes it very, very difficult to contemplate something like a ban. That is why it's my view that a forced divestiture, whereby the concerns could be partially mitigated by the removal of TikTok's Chinese ownership by ByteDance, would allow TikTok to keep operating in democracies without that overwhelming threat of foreign influence and interference.[161]

6.177However, Ms Gorman stressed that this approach should be taken in conjunction with establishing legislated safety frameworks for future apps and platforms 'so that we can get ahead of these threats before they become one of the most popular apps in the country and in the world, and we can address them head-on before getting into the scenario that is not a great one that we find ourselves in today'.[162]

6.178Mr Kenny Chiu, a Canadian MP who was subjected to foreign interference during an election, argued against 'just abruptly banning WeChat from all usage, because the shock that could go around the community and the diaspora would be huge and could sever many ties that they have, be they business ties or, even worse, familial ties with family members'. He advised that western liberal democracies should instead try to 'encourage our citizens, through education and persuasion, to diversify and not rely on one single app'.[163]

6.179The artist Badiucao argued in favour of standards-based regulation that would, at its essence, protect free speech:

I do believe Australians should have regulation to make companies like WeChat follow our laws, respect our way of life and value our freedom of speech. If they fail to obey those regulations then punishment or even banning should be a possible option in order to protect the rights of the people who are living in Australia. This has nothing to do with racism or xenophobia. This is actually to defend universal human rights. This is also how the Chinese diaspora can benefit from this mechanism because now we don't need to be subject to the Chinese government's tyranny and control. Regardless of whether you're using Chinese, English, Mandarin or Cantonese, with any platform we shall have the right to express ourselves instead of being censored, controlled and intimidated by this foreign government in China.[164]

Footnotes

[1]Digital Industry Group Inc (DIGI), Australian Code of Practice on Misinformation and Disinformation, Updated 22 December 2022, p. 2.

[2]Signatories include: Adobe, Apple, Google, Meta, Microsoft, Redbubble, TikTok and Twitter.

[3]Ms Cathy Rainsford, General Manager, Content and Consumer Division, Australian Communications and Media Authority, Committee Hansard, 12 July 2023, pp. 28–29.

[4]Ms Nerida O'Loughlin, Chair, Australian Communications and Media Authority, Committee Hansard, 12 July 2023, p. 29.

[5]Ms Mia Garlick, Regional Director of Policy, Meta, Committee Hansard, 11 July 2023, p. 2.

[6]Department of Home Affairs, Submission 1, p. 3.

[7]Mr Nick Pickles, Head, Global Government Affairs, Twitter, Committee Hansard, 11 July 2023, pp. 42 and 43.

[8]Casey Tonkin, 'Don't put your security clearance on LinkedIn', InformationAge, 23 February 2023.

[9]Veronica Chierzi and Mayra Rosario Fuentes, 'A Growing Goldmine: Your LinkedIn Data Abused For Cybercrime', TrendMicro Business, 28 March 2023.

[10]Mr Joshua Reiten, Senior Director, Legal—Digital Safety, LinkedIn, Committee Hansard, 11 July 2023, p. 16.

[11]Mr Shane Huntley, Senior Director and Global Lead, Google Threat Analysis Group, Google, Committee Hansard, 11 July 2023, p. 35.

[12]Mr Shane Huntley, Google, Committee Hansard, 11 July 2023, p. 35.

[13]Ms Rachel Lord, Senior Manager, Government Affairs and Public Policy, YouTube, Google, Committee Hansard, 11 July 2023, p. 36.

[14]WeChat, Submission 15, pp. 3–4.

[15]Senator James Paterson, 11 July 2023, available at https://twitter.com/SenPaterson/status/1678557229883207681 (accessed 26 July 2023).

[16]Dr Seth Kaplan, Private capacity, Committee Hansard, 20 April 2023, p. 13.

[17]Dr Seth Kaplan, Committee Hansard, 20 April 2023, p. 16.

[18]Internet 2.0, Submission 17 Attachment 1, pp. 4–5.

[19]Fergus Ryan, Audrey Fritz, Daria Impiombato, 'TikTok and WeChat: Curating and controlling global information flows', Australian Strategic Policy Institute, 8 September 2020.

[20]TikTok, Submission 30, pp. 4–5.

[21]TikTok, Submission 30, p. 9.

[22]Australian Human Rights Commission, Submission 9, p. 7.

[23]Department of Home Affairs, Submission 1, p. 3.

[24]Australian Security Intelligence Organisation, Submission 2, p. 6.

[25]Mrs Kateryna Argyrou, Co-Chair, Australian Federation of Ukrainian Organisations, Committee Hansard, 21 April 2023, p. 30.

[26]Mr Josh Machin, Head of Public Policy, Australia, Meta, Committee Hansard, 11 July 2023, p. 4.

[27]Ms Vicky Xu, Senior Fellow, Australian Strategic Policy Institute, Committee Hansard, 21 April 2023, p. 36.

[28]Badiucao, Private capacity, Committee Hansard, 21 April 2023, p. 36.

[29]Twitter, Submission 20: Foreign Interference through Social Media inquiry (46th Parliament), p. 2.

[30]Australian Security Intelligence Organisation, Submission 2, p. 4.

[31]Australian Human Rights Commission, Submission 9, p. 15.

[32]Australian Human Rights Commission, Submission 9, p. 15.

[33]Ms Vicky Xu, Australian Strategic Policy Institute, Committee Hansard, 21 April 2023, p. 36.

[34]Human Rights Watch, Submission 12, p. 8.

[35]Human Rights Watch, Submission 12, pp. 7–8.

[36]Australian Human Rights Commission, Submission 9, p. 16.

[37]Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, p. 48.

[38]Meta, answers to written questions on notice 1-11 (received 19 July 2023), pp. 1–2.

[39]Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, p. 48.

[40]Matt Taibbi, The Twitter Files, 13 April 2023, https://twitterfiles.substack.com/p/1-thread-the-twitter-files (accessed 27 July 2023).

[41]Meta, answers to written questions on notice 1-11 (received 19 July 2023), pp. 1–2.

[42]Meta, answers to written questions on notice 1-11 (received 19 July 2023), pp. 2–3.

[43]Meta, Transparency Center: Content restrictions based on local law, https://transparency.fb.com/data/content-restrictions/ (accessed 27 July 2023).

[44]Twitter, answers to written question on notice (received 25 July 2023), p. 1.

[45]Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, p. 49.

[46]Meta, Transparency Center: Facebook Community Standards, https://transparency.fb.com/en-gb/policies/community-standards/ (accessed 27 July 2023).

[47]Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, p. 51.

[48]Mr Richard Windeyer, Deputy Secretary, Communications and Media Group, Department of Infrastructure, Transport, Regional Development, Communications and the Arts (Department of Communications), Committee Hansard, 12 July 2023, p. 26.

[49]Ms Mia Garlick, Meta, Committee Hansard, 11 July 2023, pp. 6 and 14.

[50]Mr Josh Machin, Head of Public Policy, Australia, Meta, Committee Hansard, 11 July 2023, p. 11.

[51]Department of Communications, Submission 7, p. 3.

[52]Human Rights Watch, Submission 12, pp. 7–8.

[53]Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, p. 41.

[54]Meta, Submission 32, pp. 15–16.

[55]Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, p. 42.

[56]Australian Human Rights Commission, Submission 9, p. 16.

[57]Australian Strategic Policy Institute, Submission 13, p. 13.

[58]Meta, answers to written questions on notice (1-11) 11 July 2023 (received 19 July 2023); Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, p. 48.

[59]Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, p. 48.

[60]Mr Josh Machin, Meta, Committee Hansard, 11 July 2023, p. 10; Meta, Content restrictions based on local law, https://transparency.fb.com/data/content-restrictions/ (accessed 26 July 2023).

[61]Mr Albert Zhang, Analyst, Australian Strategic Policy Institute, Committee Hansard, 20 April 2023, p. 21.

[62]Mr David Robinson, Director, Internet 2.0, Committee Hansard, 20 April 2023, p. 35.

[63]Ms Yaqiu Wang, Senior China Researcher, Human Rights Watch, Committee Hansard, 21 April 2023, p. 2.

[64]Mr Josh Machin, Meta, Committee Hansard, 11 July 2023, p. 7.

[65]Ms Mia Garlick, Meta, Committee Hansard, 11 July 2023, p. 9.

[66]Australian Human Rights Commission, Submission 9, p. 8.

[67]Mr Albert Zhang, Australian Strategic Policy Institute, Committee Hansard, 20 April 2023, p. 21.

[68]Mr Josh Machin, Meta, Committee Hansard, 11 July 2023, p. 7.

[69]Mr Josh Machin, Meta, Committee Hansard, 11 July 2023, p. 8.

[70]Ms Cathy Rainsford, Australian Communications and Media Authority, Committee Hansard, 12 July 2023, pp. 27–28.

[71]Ms Cathy Rainsford, Australian Communications and Media Authority, Committee Hansard, 12 July 2023, pp. 27–28.

[72]Meta, Submission 32, p. 15.

[73]Dr Andrew Dowse, Director, RAND Australia, Committee Hansard, 20 April 2023, p. 28.

[74]Professor Rory Cormac, Director, Centre for the Study of Subversion, Unconventional Interventions and Terrorism, University of Nottingham, Committee Hansard, 20 April 2023, p. 37.

[75]Professor Rory Cormac, University of Nottingham, Committee Hansard, 20 April 2023, p. 41.

[76]Dr William Stoltz, Private capacity, Committee Hansard, 20 April 2023, p. 41.

[77]Mr Nick Pickles, Twitter, Committee Hansard, 11 July 2023, pp. 42–43.

[78]Meta, Submission 32, p. 15.

[79]Professor Rory Cormac, University of Nottingham, Committee Hansard, 20 April 2023, p. 42.

[80]Australian Security Intelligence Organisation, Submission 2, p. 5.

[81]Dr Andrew Dowse, RAND Australia, Committee Hansard, 20 April 2023, p. 28.

[82]See, for example: Dr Seth Kaplan, Committee Hansard, 20 April 2023, p. 13; Mr Fergus Ryan, Analyst, Australian Strategic Policy Institute, Committee Hansard, 20 April 2023, pp. 19, 21–22; Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 31; US Federal Communications Commission, Submission 4, p. 2; Australian Strategic Policy Institute, Submission 13, pp. 7–11; Rachel Lee, Prudence Luttrell, Matthew Johnson, John Garnaut, Submission 34, pp. 3–5.

[83]CyberCX, Submission 16, p. 5.

[84]Australian Security Intelligence Organisation, Submission 2, p. 4.

[85]Internet 2.0, Submission 17, p. 3.

[86]Ms Shanthi Kalathil, Private capacity, Committee Hansard, 20 April 2023, pp. 1–2.

[87]Ms Shanthi Kalathil, Committee Hansard, 20 April 2023, p. 2.

[88]Australian Strategic Policy Institute, Submission 13, p. 10.

[89]Australian Strategic Policy Institute, Submission 13, p. 7.

[90]Dr Seth Kaplan, Committee Hansard, 20 April 2023, p. 12.

[91]Social media users in China use Weixin, operated by a Chinese entity and governed by Chinese law. WeChat in Australia is designed for users outside of mainland China.

[92]Professor Sun Wanning and Professor Yu Haiqing, Submission 37, p. 2.

[93]Professor Sun Wanning and Professor Yu Haiqing, Submission 37, p. 2.

[94]Nick McKenzie and Chris Uhlmann, 'Canberra strands Beijing's man offshore, denies passport', The Sydney Morning Herald, 5 February 2019; James Leibold, 'The Australia-China Relations Institute doesn't belong at UTS', The Conversation, 5 June 2017.

[95]Badiucao, Private capacity, Committee Hansard, 21 April 2023, p. 35.

[96]Mr Tom Rogers, Electoral Commissioner, Australian Electoral Commission, Committee Hansard, 12 July 2023, pp. 34–35.

[98]The Hon Mark Dreyfus KC MP, 'TikTok ban on Government devices', Media release, 4 April 2023; Josh Taylor, 'What does TikTok's ban from Australian government devices mean for its future?', The Guardian, 4 April 2023.

[99]Josh Taylor, 'What does TikTok's ban from Australian government devices mean for its future?', The Guardian, 4 April 2023. See, for example: Simon Jiandan, Submission 36, pp. 1, 3 and 8; US Federal Communications Commission, Submission 4, pp. 1–5; AHRC, Submission 9, pp. 13–14; Rachel Lee, Prudence Luttrell, Matthew Johnson, John Garnaut, Submission 34, pp. 67–71.

[100]CyberCX, Submission 16, p. 5.

[101]Ms Shanthi Kalathil, Committee Hansard, 20 April 2023, p. 3. See also: Simon Jiandan, Submission 36, p. 8; Human Rights Watch, Submission 12, pp. 2–3; Internet 2.0, Submission 17 Attachment 1, p. 10.

[102]Ms Shanthi Kalathil, Private capacity, Committee Hansard, 20 April 2023, p. 3.

[103]Australian Strategic Policy Institute, Submission 13, p. 10.

[104]Internet 2.0, Submission 17, p. 6.

[105]Ms Abigail Bradshaw, Deputy Director-General, Australian Signals Directorate, Committee Hansard, 12 July 2023, p. 3.

[106]Mr Mike Noyes, Deputy Director-General, Intelligence Service Delivery, Australian Security Intelligence Organisation, Committee Hansard, 12 July 2023, p. 8.

[107]Ms Yaqiu Wang, Human Rights Watch, Committee Hansard, 21 April 2023, p. 3.

[108]Ms Lindsay Gorman, Senior Fellow for Emerging Technologies, Alliance for Securing Democracy, German Marshall Fund, Committee Hansard, 21 April 2023, p. 8.

[109]Ms Ella Woods-Joyce, Acting Director of Public Policy, Australia and New Zealand, TikTok, Committee Hansard, 11 July 2023, pp. 22 and 23; TikTok, answers to questions on notice (4, 5, 6), 11 July 2023 (received 21 July 2023).

[110]For example, see: Human Rights Watch, Submission 12, pp. 6–8; Internet 2.0, Submission 17 Attachment 1, p. 10; Rachel Lee, Prudence Luttrell, Matthew Johnson, John Garnaut, Submission 34, pp. 44, 47, 53 and 60–62.

[111]Mr Brendan Carr, Commissioner, United States Federal Communications Commission, Committee Hansard, 20 April 2023, p. 7.

[112]Ms Ella Woods-Joyce, TikTok, Committee Hansard, 11 July 2023, p. 22 and Mr Lee Hunter, General Manager, Operations, Australia and New Zealand, TikTok, Committee Hansard, 11 July 2023, p. 25.

[113]Mr Lee Hunter, TikTok, Committee Hansard, 11 July 2023, p. 25.

[114]Australian Human Rights Commission, Submission 9, p. 13.

[115]Lee Hunter, 'TikTok boss Lee Hunter asks to be measured on fact not fiction', Daily Telegraph, 24 October 2022.

[116]Mr Lee Hunter, TikTok, Committee Hansard, 11 July 2023, pp. 24–25.

[117]Ms Ella Woods-Joyce, TikTok, Committee Hansard, 11 July 2023, p. 30.

[118]Mr Brendan Carr, United States Federal Communications Commission, Committee Hansard, 20 April 2023, p. 6.

[119]Mr Fergus Ryan, Australian Strategic Policy Institute, Committee Hansard, 20 April 2023, p. 18.

[120]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 32.

[121]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 32.

[122]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 33.

[123]Ms Ella Woods-Joyce, TikTok, Committee Hansard, 11 July 2023, p. 25.

[124]Mr Brendan Carr, United States Federal Communications Commission, Committee Hansard, 20 April 2023, p. 6.

[125]Ms Lindsay Gorman, Alliance for Securing Democracy, Committee Hansard, 21 April 2023, p. 8.

[126]Australian Strategic Policy Institute, Submission 13, p. 9. See also: Ms Ella Woods-Joyce, TikTok, Committee Hansard, 11 July 2023, pp. 29–30.

[127]Australian Strategic Policy Institute, Submission 13, p. 9.

[128]Mr Fergus Ryan, Australian Strategic Policy Institute, Committee Hansard, 20 April 2023, p. 19.

[129]Ms Ella Woods-Joyce, TikTok, Committee Hansard, 11 July 2023, pp. 29–30.

[130]Australian Strategic Policy Institute, Submission 13, p. 10.

[131]Mr Brendan Carr, Commissioner, United States Federal Communications Commission, Committee Hansard, 20 April 2023, p. 7.

[132]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 31.

[133]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 32.

[134]Ms Ella Woods-Joyce, TikTok, Committee Hansard, 11 July 2023, p. 21; TikTok, Submission 30, p. 2.

[136]Max Mason, 'ByteDance executive resigns from TikTok Australia board', Australian Financial Review, 21 March 2023.

[138]Ms Yaqiu Wang, Human Rights Watch, Committee Hansard, 21 April 2023, p. 6.

[139]Internet 2.0, Submission 17, p. 5.

[140]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 34.

[141]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 35.

[142]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 35.

[143]Ms Shanthi Kalathil, Private capacity, Committee Hansard, 20 April 2023, p. 4.

[144]Ms Shanthi Kalathil, Private capacity, Committee Hansard, 20 April 2023, p. 4.

[145]Mr Brendan Carr, United States Federal Communications Commission, Committee Hansard, 20 April 2023, p. 8.

[146]Mr Brendan Carr, United States Federal Communications Commission, Committee Hansard, 20 April 2023, p. 8.

[147]Mr Fergus Ryan, Australian Strategic Policy Institute, Committee Hansard, 20 April 2023, p. 22.

[148]Dr Seth Kaplan, Committee Hansard, 20 April 2023, pp. 12 and 17.

[149]Ms Yaqiu Wang, Human Rights Watch, Committee Hansard, 21 April 2023, p. 4.

[150]Dr Seth Kaplan, Committee Hansard, 20 April 2023, p. 12.

[153]Ms Ella Woods-Joyce, TikTok, Committee Hansard, 11 July 2023, p. 30.

[154]Mr Will Farrell, Security Officer, TikTok US Data Security Inc, Committee Hansard, 11 July 2023, pp. 30–31.

[155]Ms Abigail Bradshaw, Australian Signals Directorate, Committee Hansard, 12 July 2023, p. 3.

[157]Dr Seth Kaplan, Private capacity, Committee Hansard, 20 April 2023, pp. 13–14.

[158]Mr Fergus Ryan, Australian Strategic Policy Institute, Committee Hansard, 20 April 2023, p. 19.

[159]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 33.

[160]Ms Yaqiu Wang, Human Rights Watch, Committee Hansard, 21 April 2023, p. 3.

[161]Ms Lindsay Gorman, Alliance for Securing Democracy, Committee Hansard, 21 April 2023, p. 9.

[162]Ms Lindsay Gorman, Alliance for Securing Democracy, Committee Hansard, 21 April 2023, p. 9.

[163]Mr Kenny Chiu, Private capacity, Committee Hansard, 21 April 2023, p. 18.

[164]Badiucao, Private capacity, Committee Hansard, 21 April 2023, p. 35.