Chapter 3

Mechanisms

3.1
The spread of foreign interference on social media uses significantly different mechanisms from other forms of cyber-attack. Foreign interference can occur via direct means, such as brute-force hacking attempts to gain access to Australian systems, attempts to gain access by deceiving users, and distributed denial-of-service attacks. These methods are used to gain access to Australian information that may be of use or interest abroad, and to otherwise disrupt Australian services.
3.2
Foreign interference can also occur via indirect means, such as through commonly used and trusted social media platforms. This form of foreign interference and disruption via social media is much harder for end users to detect, and encompasses coordinated inauthentic behaviour (CIB), algorithmic curation, microtargeting, the use of bots, human-driven interference and automated content moderation. This chapter describes how these mechanisms operate and their potential impact.

Coordinated inauthentic behaviour

3.3
The term 'coordinated inauthentic behaviour' (CIB) is a classification used by many social media companies to define undesirable behaviour. For example, Facebook describes CIB as:
…coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation.1
3.4
Facebook further notes that CIB does not necessarily need to come from a government actor:
There are two tiers of these activities that we work to stop:
1) coordinated inauthentic behavior in the context of domestic, non-government campaigns and
2) coordinated inauthentic behavior on behalf of a foreign or government actor.2
3.5
A key element of CIB is that the source of the coordinated effort seeks to hide their identity. Facebook notes that the term CIB also includes 'any coordinated effort to mislead the public about who's behind an operation through the use of fake accounts and deceptive behaviour'.3 Similarly, Mr Lee Hunter, General Manager, TikTok Australia and New Zealand, TikTok Australia, described CIB as individuals and groups who 'disguise their purpose and their identity to influence matters of importance to Australia'.4
3.6
This report utilises the term CIB to refer to activities that include coordinated foreign interference through social media, as well as other organised attempts to spread disinformation.

Algorithmic curation

3.7
The use of algorithms on social media platforms is ubiquitous. An algorithm is a program that runs in the background on a social media platform, analysing users' behaviour and tailoring each user's feed accordingly. Algorithms select content based on factors such as a user's past online activity, social connections, and their location.5 Social media platforms also use algorithms to build profiles, which are not visible to users, that collate available information about each user for corporate use, including selling targeted advertising.
3.8
While algorithms are primarily used to maintain user interest, they tend to default to highly attention-grabbing content. Responsible Technology Australia described how algorithms can be used to promote extreme and inflammatory material:
As the primary aim of these platforms is to maximise user time spent on them (to increase their advertising revenue potential), the algorithms are incentivised to serve material that is calculated to engage users more. This content tends to be more extremist or sensationalist or untrue - as it has been shown to be more captivating. This opens the door for foreign agents to seed inflammatory and sensational content that users engage with out of outrage or support, and is then amplified by the algorithms which see all engagement as warranting amplification - regardless of the nature of the content.6
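The ranking dynamic described above can be illustrated with a minimal sketch. The code below is purely illustrative and uses hypothetical names (Post, rank_feed, predicted_engagement); it is not drawn from any platform's actual system. It shows how a feed-ordering function that scores posts only on predicted engagement and personal relevance will surface sensational or untrue content just as readily as accurate content, because nothing in the score considers the nature of the material.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float   # model's estimate of likes/shares/comments
    posted_by_connection: bool    # authored or shared by someone the user follows
    matches_past_interests: bool  # overlaps with topics the user engaged with before

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts for a user's feed by a simple engagement score.

    The score rewards predicted engagement and personal relevance only;
    nothing in it considers whether the content is accurate or inflammatory,
    which is the dynamic described in the paragraph above.
    """
    def score(p: Post) -> float:
        s = p.predicted_engagement
        if p.posted_by_connection:
            s *= 1.5
        if p.matches_past_interests:
            s *= 1.3
        return s

    return sorted(posts, key=score, reverse=True)
```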
3.9
Additionally, algorithms used on social media platforms may encourage engagement with inflammatory content. As users' feeds are tailored towards the individual's interests and political beliefs, this can result in an 'echo chamber' effect and progressive engagement with more extreme content.7
3.10
Algorithms used by social media platforms to target material (including advertisements) are not generally publicly available, nor are the 'nudges' delivered by these algorithms transparent to users. In reference to Facebook, Allens Hub for Technology, Law and Innovation; the Datafication and Automation of Human Life; and the Society on Social Implications of Technology (Allens Hub et al) submitted that 'each user knows what they see on Facebook, but no individual is privileged to see the underlying algorithm driving what others are seeing'.8

Microtargeting

3.11
In extreme cases, algorithmic curation of social media feeds can result in users inhabiting an online environment that is not reflective of real-world conditions. While the use of algorithms described above could be described as the 'normal' state of being on social media platforms, microtargeting is the weaponisation of the social media environment in order to further the goals of various actors—corporate, international and even malicious. Responsible Technology Australia described the practice as a form of targeted advertising:
The unfettered approach to data collection has amassed history's largest data sets, allowing advertisers to push beyond normal constraints to deliver direct and granular targeting of consumers. This microtargeting often uses key emotional trigger points and personal characteristics to drive outcomes, which malicious actors can easily exploit to sow distrust, fear and polarisation.9
3.12
Similarly, the Department of Home Affairs observed that:
…social media can selectively deliver tailored messaging through the microtargeting of audiences identified by 'big data' analytics. This is generally the result of previous behaviours displayed by the user, or based upon the network of people or groups they follow. The delivery of different messages to different audiences is very much a feature of the 'echo chamber' effect which can drive political and social polarisation on social media. This can occur when users are continually receiving self-reinforcing communications based upon their previous online behaviours or social networks, at the expense of different views or information.10
3.13
The Joint Standing Committee on Electoral Matters' Report on the conduct of the 2016 federal election described the phenomenon as 'dark advertising', which 'allows groups and companies to target specific individuals or groups (microtargeting), with the goal of shifting their opinions. It is different from normal advertising because it will be seen by only the intended recipient.'11
3.14
The Australia Institute submitted that the lack of transparency associated with microtargeting was particularly problematic, as it 'limits scrutiny and accountability since most of the public never see the message'.12 The Australia Institute further outlined that, by its very nature, microtargeting could focus on highly specific groups:
Micro-targeting gives the ability to target very specific combinations of demographics, psychographics, user preferences, consumption habits and more to profile voters and spread targeted misinformation.13
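As an illustration only, the sketch below shows how stacking several targeting criteria produces the narrow, non-transparent audiences described above: only the users who satisfy every criterion ever see the message. All field and function names (UserProfile, select_audience, inferred_political_leaning) are hypothetical assumptions and do not reflect any platform's actual advertising interface.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    age: int
    postcode: str
    interests: set[str]
    inferred_political_leaning: str  # inferred from past engagement history

def select_audience(profiles, *, age_range, postcodes, required_interests, leaning):
    """Return only the users matching every targeting criterion at once.

    Combining demographic, geographic and psychographic filters like this is
    what yields the very specific audience segments described above.
    """
    lo, hi = age_range
    return [
        p for p in profiles
        if lo <= p.age <= hi
        and p.postcode in postcodes
        and required_interests <= p.interests      # subset check: all interests present
        and p.inferred_political_leaning == leaning
    ]
```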
3.15
Responsible Technology Australia likewise stated that the potential for microtargeting individuals in this fashion was unprecedented, leaving Australians 'extremely vulnerable to many different forms of manipulation by foreign and malicious actors who wish to threaten the Australian democratic process, exploit our declining trust in our public institutions and generally divide Australian society at large.'14
3.16
Facebook's products enable a high degree of microtargeting. The most notable example of this practice was the activities of Cambridge Analytica, in which the private data of 87 million users was harvested and used to better target political advertisements during the 2016 United States presidential election and the 2016 Brexit referendum campaign. Facebook was eventually fined in the United Kingdom for its conduct.15 The Office of the Australian Information Commissioner (OAIC) launched Federal Court action against Facebook, alleging that its collection of information breached the Privacy Act 1988. The OAIC stated in a media release that:
We claim these actions left the personal data of around 311,127 Australian Facebook users exposed to be sold and used for purposes including political profiling, well outside users' expectations.16
3.17
While Facebook denied that it was undertaking business in Australia, and thus was not in breach of Australia's privacy laws, on 14 September 2020 the Federal Court rejected this assessment and stated that the OAIC had established a prima facie case that Facebook was carrying on business in Australia.17 At the time of writing, Facebook is appealing this decision.

Bots

3.18
A common method of foreign interference and/or influence on social media is the use of 'bots': artificial social media accounts that mimic the behaviour of real users. The United Kingdom House of Commons Digital, Culture, Media and Sport Committee's Disinformation and 'Fake News': Interim Report described bots as:
…algorithmically-driven computer programmes designed to carry out specific tasks online, such as analysing and scraping data. Some are created for political purposes, such as automatically posting content, increasing follower numbers, supporting political campaigns, or spreading misinformation and disinformation.18
3.19
Bots can be difficult to identify and remove and are sometimes sufficiently 'intelligent' to interact with accounts operated by real people. These bots can be used to spread rumours, promote individuals and otherwise rapidly spread disinformation online, including undertaking widespread CIB.
3.20
In their submission, the Allens Hub et al described some of the activities that bots are used for online:
Bots may constantly share content from particular accounts, regularly post particular content, or respond to content that meets particular criteria in standard ways. In automating sharing and tagging content, bots are able to amplify the number of people reading a particular post because the number of accounts commenting or sharing content is often relevant in determining visibility of content in individual feeds. Therefore, bots make it seem as if particular viewpoints have more support in a community than what is in fact the case.19
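A minimal sketch of how such coordination might be surfaced is shown below. It flags pairs of accounts that repeatedly share the same link within seconds of each other, a crude proxy for the amplification behaviour described above. The function name, thresholds and input format are assumptions made for illustration, not any platform's actual detection logic, which relies on far richer signals.

```python
from collections import defaultdict
from itertools import combinations

def flag_coordinated_accounts(shares, window_seconds=30, min_co_shares=5):
    """Flag pairs of accounts that repeatedly share the same link within a
    short time window, a simple proxy for coordinated amplification.

    `shares` is an iterable of (account_id, url, unix_timestamp) tuples.
    """
    # Bucket shares by URL so only accounts sharing the same content are compared.
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((account, ts))

    # Count how often each pair of accounts shares the same link near-simultaneously.
    co_share_counts = defaultdict(int)
    for events in by_url.values():
        events.sort(key=lambda e: e[1])
        for (a1, t1), (a2, t2) in combinations(events, 2):
            if a1 != a2 and abs(t2 - t1) <= window_seconds:
                co_share_counts[frozenset((a1, a2))] += 1

    # Pairs exceeding the threshold are candidates for coordinated behaviour.
    return {pair for pair, count in co_share_counts.items() if count >= min_co_shares}
```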

Human labour

3.21
While foreign interference through social media can be attempted by using new technologies, such as bots or microtargeting, human labour is still used to artificially influence discourse on social media and in attempts at CIB.
3.22
Ms Katherine Mansted noted that, at a base level, even large cohorts of bots are still driven by humans who have directed them to undertake an activity:
What we've seen so far in the space of disinformation online is very much driven by humans still, even when we talk about the use of bots. Generally, bots are used to amplify messages. Those nefarious messages are still generated and curated by humans, often working under quite obscene conditions in the troll factories of the Kremlin, in the 50 cent army in China and in other places.20
3.23
People are paid to undertake labour online that includes many of the same activities undertaken by bots. Mr Alex Stamos, Director, Stanford Internet Observatory, likewise noted that disinformation online is commonly assumed to be spread solely by bots, when in fact human workers are still widely utilised:
One of the core misunderstandings here is how people use the term 'bots' too much: 'That's all automated; it's all bots; it's all automatic.' The vast majority of this activity is being done by humans. We should not underestimate how cheap it is in some of these countries to build up very large armies of people to spend all day doing this work.21
3.24
Such behaviour, whether undertaken by humans or bots, tends to be banned by most social media platforms. Ms evelyn douek noted that 'all of the platforms now have a rule that you can't have 100 people in St Petersburg pretend to be 10,000 Americans or 10,000 Australians'.22
3.25
Additionally, Mr Stamos noted that while there were actors online who were being paid to spread the messages of foreign governments on social media, the vast majority of politically engaged accounts in the Australian social media context were being operated by domestic individuals:
The number of Australians who care about Australian politics vastly outstrips all of the people Australia's adversaries can hire to sit and read and write English. The number of people who are self-motivated or who are part of political parties or political campaigns who want to do this work is much greater than the number of foreign adversaries, so the domestic problem is something we have to really worry about.23

Automated content moderation

3.26
To remove misinformation, disinformation and CIB from their platforms, most social media companies utilise a mixture of automated technology and human investigators. Facebook, which uses both,24 described the benefits of this technology to the committee:
Our detection technology helps us block millions of attempts to create fake accounts every day, and we detect millions more often within minutes after creation. We removed 1.5 billion fake accounts between April and June 2020, the majority of these accounts were caught within minutes of registration. Of these, 99.6 per cent of these accounts were detected proactively via artificial intelligence, before they were reported to us.25
3.27
Facebook also has more than 35,000 people who 'work with technology to apply their own experience and knowledge to detect and assess possible networks of CIB'.26 However, Facebook noted that while such automated systems were important, they were not a 'silver bullet':
These systems work very well at scale, but we know that, as with any automated system, sophisticated actors can get past them if they're determined and well resourced. That's why we complement technology with teams of threat intelligence analysts that hunt for and disrupt cybercriminals, [advanced persistent threats] and influence operators.27
3.28
Google likewise noted that on its platform '95 per cent of misinformation videos are flagged by our automated systems' and that it also utilises human analysis to remove CIB.28
3.29
Twitter utilises automated content moderation, with 65 per cent of the detected content being reviewed by its employees.29 Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, described how the two function together:
Globally, between July and December 2020, our internal proactive tools challenged over 143 million accounts for engaging in suspected spamming behaviour, including those engaged in suspected platform manipulation. From the outset of any election, we also establish a dedicated internal cross-functional team to lead our election integrity work.30
3.30
TikTok currently uses a mixture of technology and human moderation for analysing the videos posted to its platform:
We have technology which looks at videos and applies a view through machine learning to try and understand the content therein and to try and either restrict it immediately, on the basis that it goes against our community guidelines, or pass it along to a human moderator so that they can look at that content and decide whether it's fit for being on the TikTok platform.31
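The triage logic TikTok describes can be sketched, in simplified and hypothetical form, as a score-based routing function: high-confidence violations are restricted automatically, uncertain cases are queued for a human moderator, and the remainder are allowed. The function name and thresholds below are illustrative assumptions only, not TikTok's actual system.

```python
def triage_video(violation_score: float,
                 remove_threshold: float = 0.95,
                 review_threshold: float = 0.60) -> str:
    """Route a video based on a machine-learning violation score in [0, 1].

    Thresholds are illustrative: confident violations are restricted outright,
    borderline cases go to a human moderator, and the rest are allowed.
    """
    if violation_score >= remove_threshold:
        return "restrict"       # blocked automatically under community guidelines
    if violation_score >= review_threshold:
        return "human_review"   # queued for a human moderator to decide
    return "allow"
```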
3.31
TikTok also noted that attempts to circumvent its automated content moderation technology were occurring, which posed difficulties.32
3.32
Ms evelyn douek submitted that, during the COVID-19 pandemic, social media platforms' workers were in some cases unable to attend their workplaces. As a result, Ms douek noted, 'platforms had to enforce [their] policies and their other rules relying on artificial intelligence tools more than normal'.33 Ms douek described how this environment revealed systemic problems with automated content moderation:
After usually promoting these tools as a panacea for content-moderation issues, in a moment of unusual candour the platforms all acknowledged that this greater reliance on AI would result in more mistakes. This came as no surprise to researchers in this space, who for years have been warning about the risks of error, bias and lack of contextual analysis associated with using these tools.34

1 Facebook, August 2020 Coordinated Inauthentic Behavior Report, 1 September 2020, p. 2.
2 Facebook, August 2020 Coordinated Inauthentic Behavior Report, 1 September 2020, p. 2.
3 Mr Nathaniel Gleicher, Global Head of Security Policy, Facebook, Committee Hansard, 30 July 2021, p. 3.
4 Mr Lee Hunter, General Manager, TikTok Australia and New Zealand, TikTok Australia, Committee Hansard, 25 September 2020, p. 11.
5 Responsible Technology Australia, Submission 17, pp. 1-2.
6 Responsible Technology Australia, Submission 17, p. 2.
7 Department of Home Affairs, Submission 16, p. 4.
8 Allens Hub for Technology, Law and Innovation; the Datafication and Automation of Human Life; and the Society on Social Implications of Technology (Allens Hub et al), Submission 19, p. 2.
9 Responsible Technology Australia, Submission 17, p. 2.
10 Department of Home Affairs, Submission 16, p. 4. See also Allens Hub et al, Submission 19, p. 2.
11 Joint Standing Committee on Electoral Matters (JSCEM), Report on the conduct of the 2016 federal election and matters related thereto, November 2018, p. 176. See also Law Council of Australia, Submission 18, p. 11.
12 Australia Institute, Submission 31, p. 17.
13 Australia Institute, Submission 31.1, p. 2.
14 Responsible Technology Australia, Submission 17, p. 3. The Law Council of Australia also raised the issue of micro-targeting: see Law Council of Australia, Submission 18, pp. 11-12 and pp. 37-39.
15 See JSCEM, Report on the conduct of the 2016 federal election and matters related thereto, November 2018, pp. 174-175.
16 Office of the Australian Information Commissioner, 'Commissioner launches Federal Court action against Facebook', Media Release, 9 March 2020.
17 Office of the Australian Information Commissioner, 'Commissioner welcomes ruling on Facebook application', Media Release, 14 September 2020.
18 House of Commons, Digital, Culture, Media and Sport Committee, Disinformation and 'fake news': Interim report, 29 July 2018, p. 19.
19 Allens Hub et al, Submission 19, p. 2.
20 Ms Katherine Mansted, Committee Hansard, 22 June 2020, p. 20.
21 Mr Alex Stamos, Director, Stanford Internet Observatory, Committee Hansard, 22 June 2020, pp. 7-8.
22 Ms evelyn douek, Committee Hansard, 22 June 2020, p. 6.
23 Mr Alex Stamos, Stanford Internet Observatory, Committee Hansard, 22 June 2020, p. 7.
24 Facebook, Submission 27, p. 8.
25 Facebook, Submission 27, p. 8.
26 Facebook, Submission 27, p. 8.
27 Mr Nathaniel Gleicher, Facebook, Committee Hansard, 30 July 2021, p. 3.
28 Mrs Lucinda Longcroft, Director, Government Affairs and Public Policy, Australia and New Zealand, Google Australia, Committee Hansard, 30 July 2021, p. 12.
29 Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, Committee Hansard, 30 July 2021, p. 47.
30 Ms Kara Hinesley, Twitter, Committee Hansard, 30 July 2021, p. 47.
31 Mr Lee Hunter, TikTok Australia and New Zealand, Committee Hansard, 25 September 2020, p. 18.
32 In this particular case, the beginning of a video with extreme content was spliced with innocuous content in order to avoid the automatic moderation system: see Mr Lee Hunter, TikTok Australia and New Zealand, Committee Hansard, 25 September 2020, p. 19.
33 Ms evelyn douek, Committee Hansard, 22 June 2020, p. 2.
34 Ms evelyn douek, Committee Hansard, 22 June 2020, p. 2.
