Chapter 4

Key issues

4.1
This chapter outlines how the scale, speed and relatively low cost of social media campaigns contribute to the spread of coordinated inauthentic behaviour (CIB), misinformation and disinformation online, as well as the motivations of the actors who choose to undertake these activities. In addition, the chapter considers how social media companies' business models may contribute to negative social outcomes, including the undermining of Australia's democratic institutions, a rise in social polarisation, and increasing COVID-19 misinformation and disinformation.

Motivations

4.2
Misinformation, disinformation and CIB can spread across social media rapidly and are relatively inexpensive tools for individuals and state actors. The Department of Home Affairs submitted that:
Social media is an ideal platform for propaganda. The platforms are largely globalised and in many cases fall outside the regulatory arrangements for traditional media, broadcasting, and communications and carriage service providers. Additionally, the platforms are accessed by billions of people and are intrusive of privacy in ways that support tailored messaging. In the worst cases, the platforms can be used to promulgate 'fake news' and provocatively partisan content, undermine social cohesion and sow discontent (or at least confusion).1
4.3
The News and Media Research Centre noted that the problem was exacerbated by three factors, which made CIB a preferred course of action for those seeking to undertake foreign interference via social media:
digital networks play a central role in political communication;
the speed of social media renders information attacks hard to counter; and
digital influence operations have low implementation costs.2
4.4
Dr Jake Wallis, Senior Analyst, International Cyber Policy Centre, the Australian Strategic Policy Institute (ASPI), noted that malign actors abroad have found foreign interference through social media to be a powerful and cost-effective tool:
Access to accurate and unbiased information is a precondition for effective decision-making, yet malign actors are engaged in organised and concerted efforts to manipulate the information environment to achieve their strategic goals. Authoritarian states have identified influence operations as a cheap yet effective mechanism for influencing and weakening liberal democratic societies and regional alliances.3
4.5
Individuals who engage in spreading misinformation and disinformation online can have varied motivations. Mr Robert Size discussed some of these motivations:
…the common conception of fake news is that its publication is ideologically motivated—that those on the fringes of the political spectrum spread false information to push their own agendas, or that foreign powers spread false information to destabilise social and political systems. Undoubtedly, there is truth to this common conception. But the reality of the situation is more complicated. People have all kinds of reasons to spread false information online. Some do it in pursuit of a personal or political agenda. Some do it to shock or insult or enrage. Some do it in the belief (sometimes justified, sometimes not) that readers will interpret their content to be satire. But many, in particular those who publish real fake news (i.e. websites that masquerade as news websites), do it to turn a profit via advertising.4
4.6
Dr Jake Wallis and Mr Thomas Uren, ASPI, observed the financial motivations behind the activities that occur online and particularly on social media platforms:
Social media companies are not exchanging quality content for audience and rely instead on user generated content to attract audiences for advertising. This has resulted in changed incentives for 'news' and content producers. Online, financial incentives are linked to audience size—views, eyeballs, or clicks—and sensationalist and provocative content gathers more engagement, so content producers are de facto encouraged to produce sensationalist content, not necessarily high-quality journalism or even journalism of any sort.
The governance models and ethics that previously applied to traditional journalism have been replaced on social media; absent restraining forces, the default profit-maximising behaviour for social media platforms is to allow sensationalist, provocative content. In this social media ecosystem foreign interference and malign actors can flourish.5
4.7
Additionally, financial incentives are not necessarily distinct from political motivations. Dr Jake Wallis, ASPI, further noted the overlap between political ideology and financial motives:
Often, when we look at fringe politically motivated actors and their behaviours on the internet, they will display similar behaviours. There is often an overlap of political ideology and financial motivation because the finances sustain the operation. These kinds of actors can build little ecosystems of websites and podcasts and then run social media channels that steer audiences off into these little ecosystems—that can be quite self-sustaining financially. So there often is an overlap.6
4.8
However, it is also clear that foreign state actors are undertaking CIB attempts online. The Department of Home Affairs noted that it regularly observes 'campaigns unfolding on social media that involve disinformation', which it subsequently refers to the host platform for removal.7 Some of these disinformation cases have been linked to foreign state actors:
In 2017, following a terrorist attack in Brighton, Melbourne, the Department identified Tweets associated with accounts that have since been publicly attributed by Twitter to a foreign government entity.
In another Australian example from 2017, accounts linked to the same foreign government entity were involved in discussions related to a plot to bomb an Etihad airlines flight departing Sydney International Airport. One account used the disrupted plot to promote and amplify the hashtags “#MuslimBan” and “#StopImportingIslam”. In this instance, hostile foreign state actors used social media to interfere in Australia’s public discourse and attempt to undermine social cohesion.8

Scale

4.9
Social media platforms provided evidence to the committee that outlined their attempts to reduce misinformation, disinformation and CIB on their platforms. Given that social media platforms are the only party with a full view of what is occurring on their platforms, this evidence is critical to understanding the scope of foreign interference and CIB attempts that are carried out on social media, as well as the spread of misinformation and disinformation.
4.10
Google, which also owns the video sharing platform YouTube, described some of the issues it faces:
Government-backed or State-sponsored groups who attempt to gain access to our user's accounts have varying goals in carrying out operations targeting Google's products: Some are looking to collect intelligence or steal intellectual property; others are targeting dissidents or activists, or attempting to engage in coordinated influence operations and disinformation campaigns. Our products are designed with robust built-in security features, like Gmail protections against phishing and Safe Browsing in Chrome, but we still dedicate significant resources to developing new tools and technology to help identify, track and stop this kind of activity as it evolves.9
4.11
Google also noted the scale of the problem, as well as how foreign state actors are engaging in CIB. Google stated that it tracks more than '270 targeted or government-backed attacker groups from more than 50 countries'.10 Based on research from its Threat Analysis Group, which releases quarterly bulletins on its findings, Google described recent coordinated influence operations that it had detected:
For the first quarter of 2020, we reported disabling influence campaigns originating from groups in Iran, Egypt, India, Serbia and Indonesia. Since March, we've removed more than a thousand YouTube channels that were apparently part of a large campaign and that were behaving in a coordinated manner. These channels were mostly uploading spammy, non-political content, but a small subset posted primarily Chinese-language political content supporting Chinese Communist Party (CCP) policy and propaganda positions, similar to the findings of a recent Graphika report.11
4.12
Google further explained the motivations behind some of these activities, which can be both financially and politically driven:
These groups have different goals in carrying out their operations: while security attacks may focus on collecting intelligence or stealing intellectual property, coordinated influence operations and disinformation campaigns may be financially motivated, engaging in disinformation activities for the purpose of turning a profit; others are politically motivated, engaging in disinformation to foster specific viewpoints among a population, to exert influence over political processes, or for the sole purpose of polarising and fracturing societies.12
4.13
Similarly, in its submission, Twitter noted the challenges that the platform faced in combatting foreign interference and other CIB:
It is clear that information operations and coordinated inauthentic behavior will not cease. These types of tactics have been around for far longer than Twitter has existed. They will adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge. Given this, the threat we face requires extensive partnership and collaboration with government entities, civil society experts, and industry peers. We each possess information the other does not have, and our combined efforts are more powerful together in combating these threats.13
4.14
CIB is already occurring in Australia, with confirmed examples originating from domestic and international sources. Facebook noted that it had detected four examples of CIB occurring in Australia, including:
in March 2019, a CIB operation originating from Macedonia and Kosovo, targeting countries around the world including Australia;
in March 2019, a domestic CIB network was linked to local political actors in New South Wales;
in October 2019, a CIB network was linked to marketing firms based in the United Arab Emirates, Nigeria and Egypt that was targeting public debate primarily in the Middle East and Africa, with some focus on Australia; and
in August 2020, a CIB operation that acted primarily in English and Chinese criticised the Chinese government and spread misinformation about COVID-19.14
4.15
Significantly, Mr Nathaniel Gleicher, Global Head of Security Policy, Facebook, highlighted that domestic actors are engaging in CIB, which in turn assists foreign powers engaged in the same behaviour:
When we look at deceptive influence operations, CIB, about half of what we see is domestic in nature. While foreign interference is an important and very serious threat, we often see—as you described—foreign actors taking narratives from domestic actors, reflecting them or amplifying them, or we see entirely domestic operations using these techniques. In the conversation today, I think it is important to continue to focus on and tackle foreign interference, but we also have to think about what happens domestically. These are real citizens, from within our countries, that are driving these narratives. That tension is one that is going to continue to be a challenge for all of us, and we are seeing sophisticated foreign actors attempt to make themselves appear domestic and attempt to launder their narratives through domestic actors who may be sympathetic to their ideas.15

The attention economy

4.16
A key element of social media platforms' business models is the revenue produced through the selling of advertisements. This model, founded on capturing and monetising user attention in order to sell advertisements, has been dubbed the 'attention economy'.16
4.17
Responsible Technology Australia described how the financial incentive for social media platforms to retain users' attention has resulted in undesirable outcomes:
This 'attention economy' is powered by the unregulated and limitless collection of user’s personal data … Through this, the digital platforms have built intimate and detailed profiles on their users that enables them to be targeted via their interests, their vices, and their vulnerabilities. This information is then used by the platform’s algorithms to feed tailored content that is calculated to have the greatest potential of keeping users engaged and on the platform. This content has been shown to lean toward the extreme and sensational, as it is more likely to captivate user attention.17
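To make the dynamic described above concrete, the following is a minimal illustrative sketch, in Python, of an engagement-maximising feed ranker. Every name, signal and weight is hypothetical; it does not describe any platform's actual ranking system. The point is the incentive structure: because the objective rewards attention alone and nothing in it penalises inaccuracy, emotionally charged content is sorted to the top.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_clicks: float  # model's estimate of click-through
        predicted_dwell: float   # estimated seconds of user attention
        outrage_score: float     # emotional-arousal signal, 0..1

    def engagement_score(post: Post) -> float:
        # A ranker rewarded only for attention weights arousal signals
        # positively, because they correlate with clicks and dwell time.
        return (0.5 * post.predicted_clicks
                + 0.3 * post.predicted_dwell
                + 0.2 * post.outrage_score)

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Sensational items sort to the top; nothing in the objective
        # rewards accuracy, which is the dynamic submitters describe.
        return sorted(posts, key=engagement_score, reverse=True)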
4.18
Dr Jake Wallis and Mr Thomas Uren, ASPI, likewise argued that the business model of social media companies is encouraging the spread of low-quality content and providing an environment that is primed for foreign interference and CIB to occur:
Online, financial incentives are linked to audience size—views, eyeballs, or clicks—and sensationalist and provocative content gathers more engagement, so content producers are de facto encouraged to produce sensationalist content, not necessarily high-quality journalism or even journalism of any sort.
The governance models and ethics that previously applied to traditional journalism have been replaced on social media; absent restraining forces, the default profit-maximising behaviour for social media platforms is to allow sensationalist, provocative content. In this social media ecosystem foreign interference and malign actors can flourish.18
4.19
The data collected to fuel advertising sales also poses security risks. Ms Katherine Mansted described how the mass collection of user data, undertaken to sell highly targeted advertisements, can be exploited by malign foreign state actors:
The first is that all of us already leave a huge data residue behind us in everything that we do, and it is becoming increasingly possible for actors of all kinds—from the more benign, like marketers, to political campaigns in democratic countries—to understand intimately who we are and what makes us tick. If those actors can do it, so too can malign foreign state actors.19
4.20
Indeed, the low cost of advertisements on social media platforms holds significant appeal for foreign actors, particularly given the platforms' huge audience base. Dr Michael Jensen, Associate Professor, Institute for Governance and Policy Analysis, University of Canberra, noted the relatively low cost for foreign actors to place advertisements on Facebook:
Meanwhile, we know that the Facebook content during the 2016 election reached 126 million people and it cost less than $100,000 for the ads that [Russia] put out there. Their Twitter operations cost a little over $1 million a month, but that's still not that much compared to what countries spend on intelligence operations. It scales up much faster. Additionally, they can be much more adaptive.20
4.21
Dr Carlo Kopp warned that, as long as social media companies continue to utilise a business model that rewards 'clickbait' content, social media platforms will remain susceptible to hostile influence attacks.21
4.22
Although social media platforms' business models are based on utilising users' data to sell advertisements, much of their user base does not approve of these practices. Indeed, the Attorney-General's Department noted the ACCC's finding that '83 per cent of digital platform users were of the view that it was a misuse of their personal information when entities monitor and collect their personal information without express consent'.22

Secondary markets

4.23
Aside from the selling of advertising, secondary markets have also emerged on social media platforms, some of which are being used to facilitate CIB. One method is for malign actors to buy social media accounts in order to have a credible presence for engaging in CIB. Dr Jake Wallis and Mr Thomas Uren, Senior Analyst, ASPI, outlined their research regarding the proliferation of state-backed accounts spreading content on Twitter.23 Dr Wallis noted that older accounts were being on-sold for malicious purposes:
When we looked at Twitter's takedown data in September of last year, we could see that they were tweeting about issues as diverse as Indonesian IT tech support; they were tweeting in a huge range of languages. But, when you look at that dataset over time, you can see there's a particular point at which they start tweeting in Chinese. That suggests to us that these are accounts that have been used, perhaps hired out, by these kinds of shadow actors within this shadow economy. They're hired out, they serve various PR campaigns, and then they become involved in a state sponsored campaign.24
4.24
Dr Wallis further explained that there was a 'huge market' for such accounts, as a credible history of activity on the platform made the accounts harder to detect as compared to entirely new accounts, which were more 'overtly suspicious' to Twitter's algorithms and to Twitter's site integrity team.25
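Dr Wallis's observation implies a simple detection signal: a repurposed account's history shows an abrupt change in its dominant tweet language. The sketch below is a hypothetical Python heuristic, not Twitter's actual detection method; it scans a chronologically ordered tweet history in fixed-size windows and reports the first window in which the dominant language changes.

    from collections import Counter

    def language_shift_point(tweets, window=50):
        """tweets: chronologically ordered (timestamp, language) pairs.
        Returns the index at which the dominant language of the trailing
        window changes, or None if no shift is found."""
        prev_dominant = None
        for i in range(window, len(tweets) + 1, window):
            langs = Counter(lang for _, lang in tweets[i - window:i])
            dominant = langs.most_common(1)[0][0]
            if prev_dominant is not None and dominant != prev_dominant:
                return i  # candidate hand-over point, e.g. Indonesian -> Chinese
            prev_dominant = dominant
        return None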
4.25
Mr Nathaniel Gleicher, Facebook, described how another market had emerged, wherein marketing agencies can be hired to run disinformation campaigns on behalf of actors who would not normally have the technological capability to engage in CIB:
One that I think is worth calling out is an increasing use of marketing firms or PR agencies that are essentially running disinfo-for-hire businesses—you hire them and they run your disinformation campaign. …we've seen more use of them lately, in two ways. First, we're seeing actors that otherwise wouldn't have the resources or the skills to run an influence operation hiring a firm to do that for them. We've seen smaller local campaigns—for example, not long ago, in the Mexican election, a number of operations linked to smaller and local campaigns run by these firms. So I think—particularly as you're thinking about your upcoming elections—being aware of this tool that could be used domestically is important.26
4.26
Mr Gleicher further noted how the use of such marketing agencies makes locating the source of a CIB campaign difficult or impossible:
…if a government or a bad actor hires a PR firm and they pay them, not on Facebook, and they don't communicate with them on our platforms, we may be able to track it back to the PR firm but we won't be able to make the connection to the actor behind it. So it's interesting to note that the late 2019 operation that I mentioned that targeted Australia was in fact linked to three separate marketing firms. We don't necessarily know who hired them, but, as the different investigative teams, in governments, in civil society and in industry, get better and better at exposing these campaigns, I think we should expect more actors to use PR firms and other intermediaries to hide their identity.27

Paid political advertising

4.27
Advertisements on some social media platforms can be purchased for political purposes, which is significant given the reach of social media platforms, particularly to groups who may not engage with traditional sources of media.28 Rules about the acceptability of political advertisements vary significantly between platforms, as the Department of Home Affairs noted:
There is significant variation in each platform’s stated position of the issue of ‘disinformation’, perhaps best exemplified by differences between Facebook and Twitter’s stance on political advertising. Whereas Facebook does “not police the truthfulness” of political advertisements on its platform, Twitter has banned political advertising. Such differences also extend to definitions, policies, procedures and responses for dealing with cases of disinformation.29
4.28
Political advertising is permitted on Facebook, save when the company determines to restrict it prior to an election, as Facebook did before the 2019 Federal Election campaign.30 Facebook highlighted this temporary restriction on political advertising as an example where it had 'limited the possibility of undue foreign influence in politics',31 despite allowing such political advertising to routinely occur in non-election periods.
4.29
Facebook's paid political advertising processes have been questioned by the Law Society of New South Wales (NSW) Young Lawyers, which noted that even during this period of restricted advertising Facebook did not:
require advertisers to pass an approval process before being able to run political advertisements;
require advertisements to display a disclaimer showing the name and entity which paid for the advertisement; and
make political advertisements publicly available in an archive.32
4.30
Additionally, the Law Society of NSW Young Lawyers noted that Facebook had disabled Political Ad Collector, a tool created by ProPublica, which had allowed users to access information showing why a user was targeted by a particular political advertisement and to view political advertisements on Facebook which were not aimed at their demographic group.33 Facebook stated that it had disabled the tool because it 'did not want malicious third parties to scrape the user data harvested by the project'.34
4.31
Facebook has recently sought to reform its paid political advertising practices. As of August 2020, Facebook applied further restrictions to political advertising on the platform. The company submitted:
Anyone who wants to run a political ad needs to provide identification and be authorised by Facebook prior to running the ad. And political ads are mandated to remain archived in the Ad Library for up to seven years after they have run.35
4.32
Facebook is also planning to restrict paid political advertising prior to the upcoming Federal Election. Mr Josh Machin, Head of Policy, Australia, Facebook, stated:
We've had particular requirements in place for political ads since last year; but, in the last month, we have expanded those, in preparation for an upcoming election, to also cover social issues. So that covers organisations that might be advocating on anything that's important to democracy—the economy, education, the environment, defence, whatever it might be—and they don't necessarily need to be a political candidate or a political party.
The steps that we take in relation to those ads include undertaking an authorisation process, where people are required to provide us with identification that demonstrates that they are within Australia. We require them to put disclosures on all of those ads, so the public can see who is funding them, where it's come from.36
4.33
Google, which also owns YouTube, allows for paid political advertising on its platforms. Mrs Lucinda Longcroft, Director, Government Affairs and Public Policy, Australia and New Zealand, Google Australia, described the company's activities in this area:
With regard to elections, we provide regular training for members and senators, for staff and for the political parties to ensure that they understand the use of our tools with regard to elections. ... We work, as I mentioned, with the Australian Electoral Commission to ensure that election information is accurately and widely displayed on our platforms, and we have fairly recently developed a new electoral ad transparency program within Australia which requires that persons or organisations funding ads with a political intent based in Australia are Australian citizens or nationals or have permanent residency, and the funder of any ad must be disclosed on that ad itself.37
4.34
Facebook and Google's approach to paid political advertising differs from that of other social media platforms. TikTok does not allow paid political advertising on its platform,38 and WeChat has likewise prohibited paid promotional content regarding:
a candidate for an election; a political party; or any elected or appointed government official appealing for votes for an election;
appeals for financial support for political purposes; and
a law, regulation or judicial outcome, including changes to any such matter.39
4.35
Since October 2019, Twitter has banned paid political advertising.40 Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, described the company's rationale behind this decision, noting in particular the view that political messages should be 'earned, not bought':41
On 30 October 2019, our CEO announced that Twitter would stop all political advertising globally. We remain the only platform to date to implement a ban on political advertising. We believe the reach of political messages should be earned, not bought. This means bringing ads from political candidates and political parties to an end. Our approach to political advertising does not compromise free expression, because candidates and political parties can still share their content organically. This policy is focused on addressing the types of problems from paying for the reach of political speech to audiences on Twitter, which we believe has significant ramifications for democratic infrastructures online.42

Undermining of democratic institutions

4.36
The undermining of democratic institutions was raised as an issue by a number of submitters, who were concerned by the role social media was playing in the erosion of public trust in traditional institutions and in the integrity of Australia's elections. The Department of Home Affairs observed that 'social media can be a particularly effective tool in the manipulation of information'.43 The Department noted that hostile foreign actors actively use social media to promote narratives and spread disinformation, which serves these actors' strategic interests, whilst undermining democratic processes and institutions, and stifling dissenting voices.44
4.37
Further, the department warned that 'if left unchecked, foreign interference can exploit Australia's way of life and open system of government to erode our sovereignty'.45 It stated that acts of foreign interference can 'limit the Australian polity's ability to make independent judgements and can corrupt the integrity of Australia's systems'.46 The department concluded that such acts can also erode public confidence in Australia's political and government institutions and can interfere with private sector decision-making to the detriment of Australia's national security and economic prosperity.47
4.38
Other submitters agreed with this assessment. Twitter submitted that '[f]oreign interference is an unavoidable part of a globalised communications landscape'48 and that foreign actors are acting to 'exploit social tensions, amplify polarisation, and undermine trust and confidence in democratic norms, institutions, and values'.49 However, Twitter did note that, to date, it had 'not observed any foreign manipulation or foreign malicious activity related to suppression or interference with an election in Australia'.50
4.39
On the issue of long-term erosion of public trust in governments, political leadership and public institutions, Dr Michael Jensen noted that this growing distrust was also being exacerbated by a decline in the domestic political discourse:
… it's not just a matter of citizens acting poorly or social media platforms failing to police themselves. Politicians therefore need to take responsibility for protecting the information environment as well. In fact, our survey data reveals, as Kerry mentioned, that Australians are considerably more worried about misinformation spread by their own politicians than they are about malign foreign actors. …
Research in political science shows that elites play a significant role in agenda setting and setting the terms of debate for their supporters. Research also shows that one of the most powerful ways to shut down the spread of misinformation is for political leaders to denounce an otherwise politically convenient distortion or lie. But when they repeat those statements or at least admit them as legitimate issues for consideration they give them legitimacy, and that allows them to spread much further.51
4.40
Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, noted this same issue, stating that '[t]here are recurrent indications that Australians, irrespective of economic circumstances or education, distrust politicians and are disengaging from political processes'.52 In order to combat this effect, the submitters proposed the introduction of an anti-corruption agency and increased transparency in political activities.53

The 2019 Federal Election

4.41
A number of submitters were concerned about the conduct of the 2019 Federal Election,54 despite the Joint Standing Committee on Electoral Matters' Report on the conduct of the 2019 federal election and matters related thereto finding that 'there was no foreign interference, malicious cyber-activity or security matters that affected the integrity of the 2019 Federal election'.55 The report further stated that the committee 'found limited evidence of social media manipulation within Australia, including minimal use of bots',56 although it did warn that 'given the significant rise in organised social media manipulation campaigns, we must remain vigilant'.57
4.42
The Electoral Integrity Assurance Taskforce's agencies likewise 'did not identify foreign interference nor any other interference that compromised the delivery of the 2019 Federal election or would undermine the confidence of the Australian people in the electoral process'.58
4.43
However, several submitters were still concerned about the integrity of the 2019 Federal Election. The annex of Responsible Technology Australia's submission provides examples of alleged misinformation and CIB in an Australian context, including a Queensland University of Technology study that concluded that there were a substantial number of bots that were tweeting content related to the 2019 Federal Election.59 Responsible Technology Australia submitted:
A [Queensland University of Technology] study which examined around 54,000 accounts out of more than 130,000 Twitter users active during and after the 2019 Australian Federal Election (looking at over 1 million tweets) revealed that 13% of accounts were 'very likely' to be bots, with the majority originating from New York. This is estimated to be more than double the rate of bot accounts in the US presidential election.60
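By way of illustration only, bot-classification tools of the kind such studies rely on typically score accounts on behavioural signals. The toy Python heuristic below is not the Queensland University of Technology study's methodology, and every threshold in it is invented; real classifiers use far more features. It simply shows how an account might come to be scored 'very likely' a bot.

    def bot_likelihood(tweets_per_day, account_age_days, followers, following):
        """Crude 0..1 score from behavioural signals commonly cited in
        the bot-detection literature. All thresholds are hypothetical."""
        score = 0.0
        if tweets_per_day > 50:                  # inhuman posting tempo
            score += 0.4
        if tweets_per_day > 100:                 # even less plausible tempo
            score += 0.2
        if account_age_days < 30:                # newly created account
            score += 0.2
        if following > 10 * max(followers, 1):   # mass-follow behaviour
            score += 0.2
        return min(score, 1.0)

    # An account scoring above some cut-off (say 0.8) might be labelled
    # 'very likely' a bot under a scheme of this general shape.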
4.44
Responsible Technology Australia also cited the spread of false information on WeChat in May 2019, with reference to alleged anti-Labor and anti-Liberal propaganda.61 This included posts that criticised Australia's involvement in the Five Eyes alliance.62 Responsible Technology Australia also cited the ability of political parties and lobbying groups to spend unchecked amounts of money on social media platforms,63 and the spread of potentially inaccurate information on WeChat.64
4.45
The Law Council of Australia noted that during April and May 2019, Facebook temporarily prohibited political or electoral advertisements purchased from outside Australia, banning advertisements from foreign entities that contained references to politicians, parties or election suppression, or political slogans or party logos.65 The Law Council also explained the Australian Electoral Commission's (AEC) difficulties with Facebook during this period:
During 2019, it was reported that Facebook had not adequately applied the authorisation rules set out by the Electoral Act to advertising on its platform and did not respond to AEC inquiries about the source of advertising in a timely manner. Facebook's reported response was firstly that the advertising in question was not paid advertising and therefore was not required to comply with the authorisation requirements under Part XXA. Four weeks after AEC first raised the issue with Facebook, it was agreed that the Page was paying for advertisements, however by this stage the group had already been removed by the administrator.66
4.46
The Australia Institute was likewise concerned by Facebook's conduct during the 2019 Federal Election:
Because there is very little incentive or material consequences for ensuring an accurate and truthful election campaign process, platforms such as Facebook have failed to address suspicious and fraudulent political activity, including several incidents on political advertising integrity both in the lead up to and during the 2019 Federal Election.67

Social polarisation

4.47
Social polarisation is a term used to describe an increasing segregation of views within a given society. The increase of social polarisation, as facilitated by social media, was raised by several submitters as a particularly pernicious issue. In describing this effect, Dr Mathieu O'Neil, Associate Professor of Communication, News and Media Research Centre, University of Canberra, stated that 'the aim of foreign agencies is to emphasise divisions in society, to pit groups against each other'.68 The committee received evidence from Dr Carlo Kopp that proxy groups such as activists are used to spread disinformation and foster social division, in a similar fashion to how the Soviet Union employed print media and broadcast radio:
Another feature of the COVID-19 / SARS-CoV-2 "deception pandemic" is the extensive use of proxy groups, comprising both activists and supporters, to promote and propagate deceptive narratives and claims in both social and mass media. The employment of agenda driven domestic entities to cause mayhem and disruption is not a new phenomenon and arguably is an extension of the subversion techniques developed by the Soviets during the interwar period to induce regime change. Indeed, most of the foreign influence practices conducted in digital media by adversaries of Western democracies are no more than a "digital refresh" of classic Komintern and Soviet propaganda, disinformation, and subversion techniques, widely employed during the Cold War using print media and broadcast radio.69
4.48
While social polarisation has been increasing, it ought to be noted that not all social polarisation online is inauthentic or a result of CIB. Mr Lachlan Colquhoun, First Assistant Secretary, Department of the Prime Minister and Cabinet, noted the difficulty of distinguishing state-driven activity from organically emerging movements, and of proving which was which.70
4.49
In describing how social polarisation occurs online, Ms Katherine Mansted stated that social media platforms 'often amplify divisive and emotionally charged content'71 and that 'the business models of these particular platforms rest on attention, and, therefore, emotion sells'.72 Responsible Technology Australia likewise submitted that 'divisive, sensationalist clickbait has been shown to spread faster online, allowing foreign actors to be able to "game" this system and peddle mass amounts of content with the intention of driving polarisation'.73
4.50
In their submission, the Allens Hub for Technology, Law and Innovation; the Datafication and Automation of Human Life; and the Society on Social Implications of Technology (Allens Hub et al) described how such social polarisation can occur:
Platforms know that users tend to be more engaged with a platform when shown more extreme and controversial content. When this is built into an algorithm, it tends to drive people to content reflecting more extreme versions of their own views. … Because social media platforms in particular prioritise content generated or liked by friends, it is easy to fall into 'filter bubbles' where a user is only exposed to content that reflects their existing world-views.74
4.51
To consider one specific example, the Australian Muslim Advocacy Network (AMAN) raised the issue of rising Islamophobia in Australia, which it views as rhetoric that has been imported from the United Kingdom, Europe and the United States of America to Australia via coordinated exercises on social media platforms like Facebook.75 In its submission, AMAN cited a December 2019 investigation by The Guardian Australia that had concluded that an overseas commercial enterprise was 'using its 21-page network to churn out more than 1,000 coordinated faked news posts per week to more than 1 million followers, funnelling audiences to a cluster of 10 ad-heavy websites and milking the traffic for profit'.76 The material utilised was Islamophobic in nature and vilified specific Muslim politicians.77
4.52
Dr Jake Wallis, ASPI, also discussed investigations into Islamophobia during late 2019:
Back in 2019, in the lead-in to the federal election, there were substantial audiences in Facebook groups that were managed from Kosovo, the Republic of North Macedonia and Albania that were being fed an ongoing selection of Islamophobic content. That was driving engagement, which was reasonably sustained. I think we had audiences of about 130,000 across those Facebook groups. The aim for those actors is to drive engagement and then they will provide links within those groups, when they have that substantial audience, and they will steer those audiences to content farms, websites, outside of Facebook that will serve advertising. The serving of advertising generates revenue for the individuals behind the websites.78
4.53
Dr Wallis further noted that these actors may not be solely financially motivated and that there was 'often an overlap of political ideology and financial motivation',79 as the financial benefit of their activities allowed them to further their ideologically-based activities.80
4.54
Responsible Technology Australia also discussed the rise in Islamophobic content, noting that a network of Facebook pages run out of the Balkans had profited from the manipulation of Australian public sentiment, by using 'hot button issues such as Islam, refugees and political correctness, [to drive] clicks to stolen articles in order to earn revenue from Facebook's ad network'.81 Responsible Technology Australia specifically identified the 'Australians against Sharia' page, which has over 67,000 members, as one of the most prominent pages in the network.82
4.55
In response to the rising issue of social polarisation, Dr Richard Johnson, First Assistant Secretary, Social Cohesion Division, Department of Home Affairs, described how the Department of Home Affairs was attempting to 'strengthen Australia's social cohesion' by promoting the uptake of citizenship and democratic values and attempting to increase community resilience.83
4.56
In contrast to other submitters, Mr Nathaniel Gleicher, Facebook, downplayed the polarising role of social media:
… the research on the impact that social media has on polarisation is actually much more uncertain than we often say. There are studies suggesting that it can help reduce polarisation and there are studies suggesting that it can help contribute to it. There are also a lot of studies saying it is driven by what's happening in broadcasting and other spaces. So one of the keys in trying to tackle this is just to recognise that the threat actors are targeting the entire media ecosystem—broadcast, social media and public debate.84

Social media as a source of news

4.57
Social media is increasingly a source of news for many Australians, despite the lack of regulation and fact-checking of content on the platforms. An issue raised by several submitters was the tendency of global populations, including Australians, to access news via social media platforms, as opposed to more traditional news sources, such as newspapers, television and radio.
4.58
The University of Canberra's 2021 Digital news report noted that '[t]he decline in TV and print news is emblematic of the gradual shift away from traditional news platforms towards online and social media news sources'.85 Mr Robert McKinnon, Assistant Secretary, Department of Foreign Affairs and Trade (DFAT), also noted that, for some countries within Australia's region, Facebook is the primary vehicle for news,86 as opposed to more traditional sources of news media.
4.59
Professor Kerry McCallum, Director, News and Media Research Centre, University of Canberra, likewise noted the increasing role of social media as a news source, especially for Australians who already have low news literacy:
I can highlight three interconnected findings from our research. Firstly, Australians are increasingly relying on social media for their news. While television is still the main source of news for most people, 52 per cent of us use social media as a general source of news and 21 per cent use it for their main source of news; 39 per cent of us use Facebook to access news. Secondly, Australians have low levels of news literacy compared to other countries. In our 2018 report, we found that 58 per cent of Australians have low news literacy. Significantly, 76 per cent of those who rely on social media for news have low news literacy.87
4.60
Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, highlighted that 'social media platforms are now significant mechanisms for the dissemination of information in Australia about political parties and individual politicians, public policy issues and local/international events'.88 Dr Arnold and Dr Sheehy considered that the significance of social media platforms in this area has four bases:
the social nature: they are more trusted than 'official' sources;
the replacement of mainstream media;
weak regulation and a lack of editorial control; and
the validation social media platforms can provide to users.89
4.61
Ms evelyn douek noted that some overseas disinformation campaigns seek to 'sow the idea of distrust and apathy about public discourse more generally'.90 Similarly, Dr Michael Jensen observed that the decline in trust in news in Australia had resulted in negative consequences:
… trust in news is critical to a functioning democracy—but trust in Australia's news has been declining. … foreign actors can exploit declining trust in professional journalism in order to polarise society and undermine Australia's capacity for self-governance. News sources play a central role in the construction of a citizen's information environment, and social media are increasingly important sources through which persons are accessing information.91
4.62
Dr Jensen further explained how the spread of misinformation and erosion of trust in traditional news sources can have serious impacts on public life:
News media fragmentation and political polarisation go hand in hand to create unique vulnerabilities that foreign actors can use to manipulate the public as well as our political authorities. Fragmentation and publicly available sets of information and contextual understandings in political life open fissures in the information environment that foreign actors can exploit and use against us. Without a common set of facts, it's hard to move people to a common set of ends.92
4.63
Professor Kerry McCallum, University of Canberra, noted the need for 'a concerted effort to improve media literacy across the Australian constituency',93 as did the Australia Institute.94 Dr Jake Wallis and Mr Thomas Uren, ASPI, likewise raised the importance of traditional media sources in this context:
The issues that malign actors use to drive division, to influence and manipulate audiences at scale may not even be overtly political. As hierarchical models of information distribution (from government, from national broadcasters, from mainstream media) are replaced by a proliferation of information flows, trusted networks become increasingly important as sources of reliable content.95
4.64
This state of affairs may be improving. The University of Canberra's 2021 Digital news report found that, over the past 12 months, trust in news had risen in Australia to 43 per cent, an improvement of five percentage points on the previous year's figure. The report observed that '[t]he improvement in trust likely reflects the public's greater reliance on news in a crisis, and the active dissemination of official health advice by news outlets during the pandemic.'96
4.65
However, the 2021 Digital news report also found that concern about false and misleading information in Australia is high (64 per cent) and higher than the global average (56 per cent). Fifty-nine per cent of Australians reported that they had encountered misinformation in the week prior to the survey being undertaken. The report expanded on this finding:
Across a range of topics, experience of misinformation about COVID-19 was the highest (38%) followed by politics (33%) and climate change (29%). Nearly a quarter of participants did not know if they had come across misinformation (23%). This figure is higher among those with low education (30%), indicating a lack of awareness or ability to identify misinformation.97
4.66
In contrast, some submitters expressed concern that governments may impose arbitrary regulations to address the issue of fake news, which they viewed as eroding civil liberties and free speech.98 For example, the Australian Citizens Party submitted that the 'risk posed to Australia's democracy by foreign interference through social media' is minimal, and does not warrant the 'extreme policy responses' that have been proposed to mitigate it, either in Australia or internationally.99

Targeting of journalists

4.67
Poor news practices on social media platforms are also having impacts on journalists, who may be impersonated or deceived. Google observed that, since the beginning of 2020, a rising number of attackers had been impersonating news outlets and journalists, seeking to seed false stories with other reporters to spread disinformation.100 Foreign policy experts were also being targeted by attackers seeking to access their research, the organisations they work for, and fellow researchers.101
4.68
Mr Nathaniel Gleicher, Facebook, described how legitimate journalists can be deceived into writing inauthentic content:
One of the new techniques we see increasingly is that actors, particularly those linked to Russia and Iran, are directly reaching out to reporters around the world, trying to trick them into writing the stories for them, the idea being that, if you can get a reporter to write your false narrative, you already get a whole bunch of public awareness. We've seen this be successful in the United States.102
4.69
Mr Gleicher further outlined a concerning situation where journalists were being hired by false media organisations:
… there are people who are, unfortunately, tricked into working for one of those campaigns. We've seen Russian actors run false media organisations. One of the most prominent examples is that they ran two false media organisations targeting, among other places, the United States, one aimed at the far Right and the other aimed at the far Left. They hired local reporters or freelancers who didn't know any better to write for them, trying to make their voices appear more authentic and trying to have more impact.103

Transparency

4.70
Currently, social media platforms determine what materials are publicly released regarding the content on their platforms and the activities they undertake, including content moderation policies, the use of user data and the nature of the algorithms employed. Witnesses noted that this lack of transparency has created difficulties in discerning the internal functioning of social media companies and whether their responses to foreign interference, misinformation, disinformation and CIB occurring on their platforms have been adequate.104
4.71
Ms evelyn douek noted that social media platforms have been reluctant to release information regarding content moderation and the removal of disinformation and misinformation:
How successful have platforms been in adopting these [takedown] policies? We simply don't know. The data that platforms have released is relatively sparse. Some of the figures that platforms have released about takedowns sound impressive, but it's important to emphasise we don't know what these truly represent as a percentage of misinformation on these platforms, nor do we know how much unproblematic content was swept up in these removals. The only thing we know for certain, from experience, is that there will have been some measure of both over- and under-enforcement of misinformation policies.105
4.72
AMAN noted that due to a lack of transparency requirements in Australia, 'it is not possible to ask Facebook to reveal if a Page is run by who it claims to be run by, even if it is in public interest or important to running a civil remedy claim'.106
4.73
Many submitters raised the necessity of transparency and the need for more information to be released to the public. Dr Jake Wallis and Mr Thomas Uren, ASPI, submitted that social media companies should be required to make their content moderation policies and enforcement actions transparent, including publicly releasing content moderation guidelines and regular transparency reports.107 Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, likewise recommended the institution of a transparency regime,108 as did Responsible Technology Australia.109
4.74
The Law Council of Australia stated that increased transparency would need to be 'proportionate, reasonably appropriate and adapted to address the legitimate threat of disinformation and foreign interference'110 and outlined materials that, in its view, ought to be released publicly:
Requirements, legislative or otherwise, which would mandate social media platforms to establish public databases of political advertisements, which present data such as the amount spent on the advertisements, and by whom the money was spent, and targeting parameters, would be an important and significant step forward.111
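As a sketch of the kind of record such a public database might hold, the Python structure below captures the data points the Law Council lists: spend, the identity of the payer, and targeting parameters. The field names are assumptions for illustration, not a description of any existing platform database or API.

    from dataclasses import dataclass, field

    @dataclass
    class PoliticalAdRecord:
        ad_id: str
        advertiser: str       # entity that placed the advertisement
        funder: str           # who paid, per the disclosure notice
        spend_aud: float      # amount spent on the advertisement
        targeting: dict = field(default_factory=dict)  # e.g. {"state": "NSW", "age": "18-34"}
        ran_from: str = ""    # ISO date the advertisement began running
        ran_to: str = ""      # ISO date it stopped running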
4.75
Social media companies have voluntarily undertaken some transparency measures, despite the lack of any formal requirement that they do so. Facebook submitted that it undertakes quarterly reporting on Community Standards Enforcement and monthly reporting on the removal of coordinated inauthentic behaviour.112 It has also established an Ad Library, which catalogues 'all active ads any Page is running, along with more Page information such as creation date, name changes, Page merges and the primary country of people who manage Pages with large audiences'.113
4.76
TikTok noted the public availability of its terms of service, privacy policy, and community guidelines, which it uses to provide a 'safe and friendly environment' for users.114 Google also publishes transparency reports, including one on misinformation and disinformation.115
4.77
Twitter noted that its activities in this area include:
retrospective reviews of elections;116
establishing an Ads Transparency Centre, which 'includes a repository of all advertisements served on Twitter within the last seven calendar days, as well as all of the political campaign ads paid for by certified political advertisers in Australia';117 and
a bi-annual Transparency Report.118
4.78
In 2018, Twitter also commenced archiving Tweets and media connected to potentially state-backed operations, which are publicly released:
In line with our strong principles of transparency and with the goal of improving understanding of foreign influence and information campaigns, we release archives of Tweets and media associated with potential information operations that we had found on our service, including the 3,613 accounts we believe were associated with the activities of the Internet Research Agency on Twitter dating back to 2009. We made this data available with the goal of encouraging open research and investigation of these behaviors from researchers and academics around the world.119
4.79
In its supplementary submission, Twitter noted that this archive is 'the only public archive of its kind'120 and now spans operations across 15 countries, including more than nine terabytes of media and 200 million Tweets.121 Twitter further stated that these datasets have been used by 'thousands of researchers' to 'conduct their own investigations and share their insights and independent analyses with the world'.122
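As an indication of how researchers use these archives, the sketch below (Python) summarises one of the published CSV datasets by counting distinct accounts and the most common tweet languages. The file name and column names ('userid', 'tweet_language') are assumptions based on the general format of past releases and would need to be checked against the actual download.

    import csv
    from collections import Counter

    def summarise_archive(path):
        accounts, languages = set(), Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                accounts.add(row["userid"])            # assumed column name
                languages[row["tweet_language"]] += 1  # assumed column name
        print(f"distinct accounts: {len(accounts)}")
        for lang, count in languages.most_common(5):
            print(f"{lang}: {count} tweets")

    # summarise_archive("information_operation_tweets.csv")  # hypothetical file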

COVID-19 misinformation and disinformation: a case study

4.80
The COVID-19 pandemic has highlighted the spread of online misinformation, which has resulted in real-world impacts and created significant challenges for governments managing the pandemic. Dr Jake Wallis, ASPI, summarised the domestic and international situation in terms of COVID-19 misinformation and disinformation:
The pandemic has created a perfect storm of information manipulation, with these state and non-state actors echoing each other's theories, tactics and techniques. COVID-19 has spread globally and has been used to prey upon people's fears and to drive engagement for malign purposes, be it to scam people, to drive conspiracy theories, for extremists to recruit, radicalise or proselytise, or simply by states trying to gain advantage. This activity can mobilise offline. It can drive real-world harms. For example, the conflation of 5G conspiracy theories has resulted in attacks on cell towers. It's worth noting that throughout 2019 the Kremlin-funded RT advanced the theory that 5G causes harm. We also note last week's warnings from ASIO on the rise of right-wing extremism driven by COVID-19.123
4.81
In the Australian context, Dr Michael Jensen described some examples of the misinformation occurring and the real-world impacts it is having:
We need look no further than the anti-mask and lockdown protests in Melbourne that we've seen recently to see how misinformation can mobilise people in ways that undermine our capacity to govern. False information downplaying the risks of the COVID virus, questions about the health value of lockdowns and face masks and conspiracies that politicise these issues in a fight against a conspiracy to deny basic rights—these kinds of communications are easy to find online, and once they become a premise for behaviour they undermine our capacity to contain the COVID-19 pandemic.124
4.82
Mr Alex Stamos, Director, Stanford Internet Observatory, described some of the common themes he had seen in COVID-19 misinformation and disinformation in the United States context:
Here in the United States, a lot of the COVID disinformation has been domestic. A lot of it has been from people who are motivated around antivax, who believe that everything's a conspiracy—Bill Gates, despite having more money than God, has got some kind of plan to use vaccines to get more money; I'm not exactly sure what their theory is. One of the most effective pieces of domestic disinformation in the United States was a video called Plandemic, which seems to be financially motivated—grifters who are pushing it to sell stuff. That has been very effective, much more effective than any foreign campaign that we've seen.125
4.83
Ms Katherine Mansted likewise noted that COVID-19 was acting as a misinformation accelerant:
…it is my opinion that COVID-19 has very much been an accelerant and will continue to be an accelerant for propaganda and disinformation. There is a significant body of cognitive research which demonstrates that people are more susceptible to propaganda and disinformation in times of high anxiety and uncertainty. COVID-19—not just the acute health crisis but also the economic and social consequences that will continue to flow for some time—will create fertile ground for disinformation and propaganda. This is something that we've seen a number of actors take advantage of, such as state actors, peddling disinformation and trying to enhance social polarisation, and extremist groups.126
4.84
Dr Carlo Kopp also noted that the example of COVID-19 misinformation was illustrative of how quickly and widely misinformation can spread online:
The chaos and disruption observed as a result of the failure to effectively contain misinformation, disinformation and malinformation in social media, and mass media, during the current COVID-19 / SARS-CoV-2 pandemic, provides a good indication of the damage foreign nation state actors can inflict with a very modest investment.127
4.85
However, much COVID-19 misinformation in Australia seems to be limited to English-language sources. Dr Richard Johnson, Department of Home Affairs, explained how, while there was misinformation spreading, it did not seem to be specifically targeted at culturally and linguistically diverse communities:
We haven't identified a sustained campaign. Certainly, there is a lot of misinformation circulating in the environment about COVID and COVID-related issues. We haven't identified a particular campaign [in languages other than English], but there is a lot of interaction between us and communities, in terms of providing them with trusted sources of information. In particular, we are referring them back to the Department of Health and australia.gov.au and making sure that, where it's needed, that trusted source of information is also translated for communities so that they can understand what the facts are.128
4.86
Additionally, Mr Alex Stamos, Stanford Internet Observatory, noted that the COVID-19 pandemic and the misinformation environment proliferating around it had created political openings for opportunistic governments:
What we've seen from an online propaganda perspective is a bifurcation between online propaganda that is about supporting the actions of a government and then using COVID as a tool to tear down other governments, often as part of programs or a geopolitical strategy that has existed for years. So they are opportunistically grabbing onto COVID to execute on geopolitical goals that have always existed.129
4.87
However, it is not only government actors taking advantage of the misinformation and disinformation surrounding COVID-19. Dr Jake Wallis, ASPI, noted that a large range of actors were utilising COVID-19 misinformation and disinformation, whether for financial, political or geostrategic motivations,130 which may account for its vast proliferation.
4.88
Lastly, Dr Richard Johnson, Department of Home Affairs, noted that extremists were utilising the COVID-19 pandemic to amplify their message:
Some of the extremist milieu are using the pandemic, and also the response to the pandemic, to amplify some of their key narratives. In the extremist context, we have certainly seen instances where COVID related narratives, including misinformation, are playing out.131

Scale

4.89
Social media platforms' ability to detect COVID-19 disinformation, and their willingness to publicly release data about what they detect, are crucial for assessing the scale of the issue.
4.90
Regarding the scope of the problem, Dr Carlo Kopp noted that Australia, like comparable nations abroad, was experiencing a 'deluge' of online misinformation relating to COVID-19.132 Dr Kopp stated:
The scope and scale of the misinformation, disinformation and malinformation being distributed globally during the COVID-19 / SARS-CoV-2 pandemic is unprecedented and without any doubt dwarfs the two previous benchmarks, the UK Brexit vote and the US 2016 Presidential Election. In part this reflects the global footprint of the viral pandemic, and in part it reflects nation state players concurrently targeting domestic and foreign audiences.133
4.91
Ms evelyn douek noted that, while each major platform had 'rolled out more aggressive misinformation policies specifically related to COVID-19',134 this had not been without difficulties:
In terms of broader lessons from this approach, while reception of these measures has been broadly positive, even in the relatively more scientific realm of health information and where evidence of harms should be easier to determine, defining misinformation has been difficult in some cases. Authoritative sources have sometimes conflicted with each other and at other times simply got things wrong, which is to be expected in a rapidly evolving situation of high uncertainty. This does, however, suggest caution in overlearning the lessons of the pandemic in terms of policing this information with a heavy hand across all categories of content.135
4.92
Facebook reported that, between April and June 2020 alone, it had removed over 7 million posts for spreading harmful misinformation about COVID-19, including 'harmful claims, like drinking bleach cures the virus or that COVID-19 was caused by 5G'.136 On 30 July 2021, Mr Josh Machin, Facebook, reported that Facebook had by then removed 18 million posts containing harmful COVID-19 misinformation, and had fact-checked and labelled a further 167 million posts as false information.137 Mr Machin also noted that in 2020 Facebook removed 110,000 pieces of harmful COVID-19 misinformation that originated in Australia.138
4.93
In its July 2020 supplementary submission, Twitter stated that it had 'removed thousands of Tweets around the globe for containing misleading and potentially harmful content' and that its automated systems 'have challenged approximately 4.3 million accounts which were targeting discussions around COVID-19 with spammy or manipulative behaviors'.139
4.94
The Australia Institute reported that, in a 2020 investigation conducted with the Queensland University of Technology, it had found evidence of 'coordinated COVID-19 misinformation and disinformation on Twitter, for either commercial or political purposes'.140 The Australia Institute noted:
In some ways, the spread of COVID-19 disinformation mimics the outbreak of the virus itself, with the disinformation amplified and given authenticity by the wider fringe community that spreads it after it has been introduced by the inauthentic actors.141
4.95
In its July 2020 submission to the inquiry, Google reported that it had 'detected 18 million malware and phishing Gmail messages per day related to COVID-19, in addition to more than 240 million COVID-related daily spam messages'.142 Furthermore, Google noted that some of these attacks were backed by government actors:
Google’s [Threat Analysis Group] has specifically identified over a dozen government-backed attacker groups using COVID-19 themes as lure for phishing and malware attempts—trying to get their targets to click malicious links and download files, including in Australia.143
4.96
On 30 July 2021, Mr Richard Salgado, Director, Law Enforcement and Information Security, Google, noted that the amount of COVID-19 misinformation and disinformation was 'voluminous' and included 'a lot of criminal activity'.144 Mrs Lucinda Longcroft, Director, Government Affairs and Public Policy, Australia and New Zealand, Google Australia, noted that, since the beginning of 2020, Google 'have removed around 800,000 videos from YouTube and over 275 million COVID apps from across our platforms'.145

Social media platforms' responses to COVID-19 misinformation and disinformation

4.97
Social media platforms are attempting to combat the rapid proliferation of COVID-19 misinformation and disinformation on their platforms, with varying levels of success. Many of the platforms described to the committee what activities they had been undertaking.
4.98
Ms Kara Hinesley, Twitter, described how Twitter has prioritised tackling misinformation 'based on the highest potential for harm', which included a particular focus on COVID-19 misinformation and disinformation.146 Twitter has undertaken a number of activities in this area, including:
a global expansion of the COVID-19 search prompt, which ensured that people on the platform 'are met with credible, authoritative content at the top of their search experience';147
a curated event feature for COVID-19, which contained credible information and the latest facts about COVID-19;148
providing a dataset to developers and researchers regarding the public conversation about COVID-19 in real-time;149
increasing machine learning and automated moderation to take action on potentially abusive and manipulative content;150
building systems that enable its team to continue to enforce its rules remotely around the world;151
instituting a global content severity triage system to prioritise potential rule violations that 'present the biggest risk of harm';152
executing daily quality assurance checks on content enforcement processes;153
engaging with Twitter's partners around the world;154
reviewing Twitter's rules in the context of COVID-19;155
broadening its definition of harm to 'address content that goes directly against guidance from authoritative sources of global and local public health information';156 and
new policies regarding the use of COVID-19 in advertisements.157
4.99
Google has likewise increased its removal of COVID-19 misinformation and disinformation, as well as launching a '$3 million COVID vaccine counter misinformation fund' and 'launch[ing] over 200 new products, with about a $1 billion investment, in countering COVID misinformation'.158 Mrs Lucinda Longcroft, Google Australia, also confirmed that Google had provided '$5 million worth of free advertising to the Australian government, which has resulted in 20.6 million impressions of authoritative information for Australian users'.159
4.100
TikTok reported that it had also undertaken activities in this area:
…we introduced a global misinformation strategy, which included updates to our policies and rolling out new in-app features to provide more context on COVID-19 and help combat against misleading medical information online. We are working hard to minimise the opportunity for disinformation to gain traction on TikTok, and we are working with public health organisations (like World Health Organisation, International Federation of Red Cross, and popular voices for public health and science, like Bill Nye the Science Guy) to provide trusted information to our community.160
4.101
TikTok also confirmed that it removes false medical advice about COVID-19, as well as 'false information that is likely to stoke panic and consequently result in real world harm', conspiracy theories, and hate speech.161 It also introduced a feature within the application that connects users to authoritative sources of health information.162
4.102
WeChat has also prohibited 'content which may constitute a genuine risk of harm or direct threat to public safety'.163 WeChat noted in its submission, as an example, that it has prohibited 'the advertising and sale of COVID-19 home testing kits'.164
4.103
Facebook has also undertaken activities in this area, including:
removing misinformation that 'violates our Community Standards and can cause imminent, physical harm';165
reducing the number of people who see content that 'does not violate Community Standards, but still undermines the authenticity and integrity of the platform';166
third-party fact-checking 'to review and rate the accuracy of posts on Facebook and Instagram' and applying warning labels once posts have been identified as false, as well as reducing the distribution of false content;167
displaying prompts to direct users to official sources of information, including from the Australian government and the World Health Organization;168
launching a Coronavirus Information Centre that 'provides a centralised hub of latest updates';169
donating advertising credits to the Australian Federal government and state governments' advertising campaigns;170
introducing a chatbot on WhatsApp to help individuals access the latest information;171 and
providing users with additional context about COVID-19 related links via a new notification that appears when they are about to share them.172
4.104
Facebook also removes groups and pages that attempt to spread vaccine misinformation and disinformation, as well as reducing the distribution of highly forwarded messages until they have been fact-checked.173 Facebook stated that it is 'rejecting ads and fundraisers that include anti-vaccination misinformation once we find them'.174 However, Facebook also noted its own limitations in this area:
Combatting misinformation is a highly challenging and adversarial space, so we still miss things and won't catch everything – but we’re making progress.175

The Australian government's response to COVID-19 misinformation and disinformation

4.105
The government's response to COVID-19 misinformation and disinformation has developed over time, with more resources and activities introduced as the pandemic progressed through late 2020 and early 2021. The Special Broadcasting Service (SBS) has, since the early stages of the COVID-19 pandemic, provided news coverage in languages other than English. SBS outlined its activities in this area:
Comprehensive coverage of reliable health information across all of its 68 language services, including in relation to government issued announcements, quarantine recommendations, evacuations, travel plans, and education advisories;
Very extensive in language coverage, in particular for Mandarin and Cantonese speaking communities, encompassing community impacts, talk back and interview programs, explainers, and dispelling misinformation;
In addition to the features and interviews, SBS’s Mandarin program has also hosted talkback programs, giving listeners the opportunity to speak with public health expert Dr Zhang Ying about any concerns on this issue; and
As the situation progresses, SBS’s extensive coverage has continued to support and service the Persian-speaking and Italian-speaking communities, among others.176
4.106
However, as late as 11 December 2020, there was still no single body dedicated to combatting COVID-19 misinformation and disinformation. Mr Lachlan Colquhoun, Department of the Prime Minister and Cabinet, confirmed that, to his knowledge, there was no COVID-19 disinformation taskforce, stating that he had 'not heard of one'.177 Mr Neil Hawkins, Acting Deputy Coordinator and Acting First Assistant Secretary, National Counter Foreign Interference Coordination Centre, Department of Home Affairs, stated that 'there was a COVID taskforce, of which disinformation was a part, but I don't think there was a disinformation taskforce'.178
4.107
Dr Richard Johnson, Department of Home Affairs, explained that the Department of Home Affairs has been increasing its activities in relation to disinformation, utilising and expanding on its pre-existing capabilities:
… we've been working in the online space in the [countering violent extremism] context for a number of years. … During the COVID period, we built up that capability, particularly around the issue of misinformation in the context of the government's handling of the COVID pandemic. We built a cross-agency team which brought in a number of relevant actors—including from the Department of Health … we bolstered our capability, which did involve looking for instances of misinformation online—particularly, but not just, identifying it and then referring it to social media companies for action.179
4.108
Dr Johnson further explained that the Department of Home Affairs has also been working with the Department of Health to release fact sheets:
We did a fact sheet which addressed some of the key themes around misinformation in a COVID context, and we also did a lot of work to publish and translate those materials, noting that Australia is one of the most successful multicultural countries in the world. We set up a kind of translation function to ensure that, in up to 63 languages, we were getting official information out to communities about COVID.180
4.109
Additionally, Mr Robert McKinnon, DFAT, stated that DFAT was undertaking a six-month pilot program, which included a counter disinformation unit.181 Mr McKinnon confirmed that DFAT had reported on the work of the pilot to the National Security Committee of cabinet, despite the pilot's monitoring and evaluation framework being 'not settled yet':182
We're in the process of establishing a quite detailed monitoring and evaluation framework. That is not settled yet, but, as you would have heard from that hearing, we certainly did have a framework, in terms of our goals and objectives, and we've reported against those goals and objectives.183
4.110
By 30 July 2021, the Department of Home Affairs had undertaken further work in this area. Dr Richard Johnson, Department of Home Affairs, described how the department's activities had continued throughout the COVID-19 pandemic:
We continue to support whole-of-government efforts in the COVID information environment, working very closely with the Department of Health, to put out factual information about the pandemic and to translate that information into up to 63 languages. We also continue to refer instances of misinformation to the social media platforms. Since the start of this year, to the end of June, we have referred 1,735 instances of COVID related misinformation to platforms. We do refer them to the platforms of course. They adjudicate what to do if the particular instance is against their own terms of service.184
4.111
Dr Johnson stated that 'about seven' full-time equivalent staff within the Department of Home Affairs are 'identifying and referring instances of COVID related misinformation',185 with such material referred to social media platforms for action.186 Dr Johnson described what kinds of misinformation the staff are seeking to identify:
We tend to look for misinformation or disinformation in the COVID context under three broad categories. The first is whether, prima facie, it could endanger people's lives—for example, if you were to take X to prevent you from getting Y, where the X in question is something that could be seriously dangerous to an individual's health. There is also information that would seriously compromise the national COVID response. That could come in a range of forms—for example, people putting out disinformation that said, 'Don't get vaccinated, because there's a chip that will link you to 5G networks.' Where we think the COVID environment is being used to vilify members of our community, we would also refer that. Those are the three broad categories that we use in terms of referrals. It is then up to the companies we refer it to to adjudicate it against their own terms of service.187

The Joint Committee on Law Enforcement

4.112
The Joint Committee on Law Enforcement released a report in August 2021, entitled Vaccine related fraud and security risks. The report ultimately concluded that COVID-19 disinformation and misinformation pose a significant threat:
It has also become clear that COVID-19 mis/disinformation is not only leading to vaccine hesitancy, a health policy concern, but is also leading to some instances of civil disobedience and protest. COVID-19 mis/disinformation is therefore also a law enforcement issue of growing concern, particularly as individuals and groups become more radicalised.188
4.113
The Joint Committee on Law Enforcement further noted that '[a]ll Australians, law enforcement agencies and governments must work together to ensure that when the pandemic is over, Australia is not left with the infectious disease of disinformation being used for fraudulent purposes, spreading fear and distrust of our necessary institutions'.189 The report subsequently recommended that:
…the Department of Home Affairs ensure that the spread of COVID-19 misinformation and disinformation is monitored for extremist content and links to international extremist groups, as well as undertake greater efforts to counter misinformation and disinformation, especially among Aboriginal and Torres Strait Islander communities and culturally and linguistically diverse communities.190

  • 1
    Department of Home Affairs, Submission 16, p. 4.
  • 2
    News and Media Research Centre, Submission 8, pp. 2-3.
  • 3
    Dr Jake Wallis, Senior Analyst, International Cyber Policy Centre, Australian Strategic Policy Institute (ASPI), Committee Hansard, 22 June 2020, p. 10. See also Dr Carlo Kopp, Committee Hansard, 22 June 2020, p. 35.
  • 4
    Mr Robert Size, Submission 3, p. 2.
  • 5
    Dr Jake Wallis and Mr Thomas Uren, ASPI, Submission 2, p. 5.
  • 6
    Dr Jake Wallis, ASPI, Committee Hansard, 22 June 2020, p. 13. See also Ms Katherine Mansted, Committee Hansard, 22 June 2020, p. 18.
  • 7
    Department of Home Affairs, Submission 16, p. 7.
  • 8
    Department of Home Affairs, Submission 16, p. 7.
  • 9
    Google Australia, Submission 23, p. 2.
  • 10
    Google Australia, Submission 23, p. 2.
  • 11
    Google Australia, Submission 23, p. 2.
  • 12
    Google Australia, Submission 23, pp. 2-3.
  • 13
    Twitter, Submission 20, p. 6.
  • 14
    Mr Nathaniel Gleicher, Global Head of Security Policy, Facebook, Committee Hansard, 30 July 2021, p. 3.
  • 15
    Mr Nathaniel Gleicher, Facebook, Committee Hansard, 30 July 2021, p. 3.
  • 16
    Responsible Technology Australia, Submission 17, p. 2.
  • 17
    Responsible Technology Australia, Submission 17, p. 2.
  • 18
    Dr Jake Wallis and Mr Thomas Uren, ASPI, Submission 2, p. 5.
  • 19
    Ms Katherine Mansted, Committee Hansard, 22 June 2020, p. 20.
  • 20
    Dr Michael Jensen, Associate Professor, Institute for Governance and Policy Analysis, University of Canberra, Committee Hansard, 22 June 2020, p. 27.
  • 21
    Dr Carlo Kopp, Submission 21, p. 10.
  • 22
    Attorney-General's Department, Submission 13, p. 6.
  • 23
    Dr Jake Wallis and Mr Thomas Uren, Senior Analyst, International Cyber Policy Centre, ASPI, Committee Hansard, 22 June 2020, pp. 11-12.
  • 24
    Dr Jake Wallis, ASPI, Committee Hansard, 22 June 2020, p. 11.
  • 25
    Dr Jake Wallis, ASPI, Committee Hansard, 22 June 2020, p. 11.
  • 26
    Mr Nathaniel Gleicher, Facebook, Committee Hansard, 30 July 2021, p. 3.
  • 27
    Mr Nathaniel Gleicher, Facebook, Committee Hansard, 30 July 2021, p. 3.
  • 28
    Professor Kerry McCallum, Director, News and Media Research Centre, University of Canberra, Committee Hansard, 25 September 2020, p. 1.
  • 29
    Department of Home Affairs, Submission 16, p. 7.
  • 30
    Law Society of New South Wales (NSW) Young Lawyers, Submission 11, p. 7.
  • 31
    Facebook, Submission 27, pp. 17-18.
  • 32
    Law Society of NSW Young Lawyers, Submission 11, pp. 7-8.
  • 33
    Law Society of NSW Young Lawyers, Submission 11, p. 8.
  • 34
    Law Society of NSW Young Lawyers, Submission 11, p. 8.
  • 35
    Facebook, Submission 27, p. 17.
  • 36
    Mr Josh Machin, Head of Policy, Australia, Facebook, Committee Hansard, 30 July 2021, p. 8.
  • 37
    Mrs Lucinda Longcroft, Director, Government Affairs and Public Policy, Australia and New Zealand, Google Australia, Committee Hansard, 30 July 2021, p. 13.
  • 38
    TikTok, Submission 26, p. 4.
  • 39
    WeChat, Submission 30, [p. 3].
  • 40
    Twitter, Submission 20.1, p. 4.
  • 41
    Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, Committee Hansard, 30 July 2021, p. 48.
  • 42
    Ms Kara Hinesley, Twitter, Committee Hansard, 30 July 2021, p. 48.
  • 43
    Department of Home Affairs, Submission 16, p. 3.
  • 44
    Department of Home Affairs, Submission 16, p. 3.
  • 45
    Department of Home Affairs, Submission 16, p. 3.
  • 46
    Department of Home Affairs, Submission 16, p. 3.
  • 47
    Department of Home Affairs, Submission 16, p. 3.
  • 48
    Twitter, Submission 20, p. 8.
  • 49
    Twitter, Submission 20, p. 8.
  • 50
    Ms Kara Hinesley, Twitter, Committee Hansard, 30 July 2021, p. 48.
  • 51
    Dr Michael Jensen, Committee Hansard, 25 September 2020, p. 3.
  • 52
    Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, Submission 7, [p. 9].
  • 53
    Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, Submission 7, [p. 9].
  • 54
    Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, Submission 7, [p. 5]; Law Society of NSW Young Lawyers, Submission 11, p. 4; Responsible Technology Australia, Submission 17, pp. 9-14; Law Council of Australia, Submission 18, p. 20; and Australia Institute, Submission 31.1, p. 10.
  • 55
    Joint Standing Committee on Electoral Matters (JSCEM), Report on the conduct of the 2019 federal election and matters related thereto, December 2020, p. 112. This finding was affirmed by Mr Neil Hawkins: Mr Neil Hawkins, Acting Deputy Coordinator and Acting First Assistant Secretary, National Counter Foreign Interference Coordination Centre, Department of Home Affairs, Committee Hansard, 11 December 2020, p. 6.
  • 56
    JSCEM, Report on the conduct of the 2019 federal election and matters related thereto, December 2020, p. 112.
  • 57
    JSCEM, Report on the conduct of the 2019 federal election and matters related thereto, December 2020, p. 112.
  • 58
    Department of Home Affairs, Submission 16, p. 6.
  • 59
    Responsible Technology Australia, Submission 17, pp. 9-14 (Australian examples) and pp. 15-17 (international examples). Law Council of Australia, Submission 18, pp. 8-10 also provides international examples of foreign interference via social media.
  • 60
    Responsible Technology Australia, Submission 17, p. 9.
  • 61
    Responsible Technology Australia, Submission 17, pp. 9-10 and pp. 12-13.
  • 62
    Responsible Technology Australia, Submission 17, p. 13.
  • 63
    Responsible Technology Australia, Submission 17, p. 11.
  • 64
    Responsible Technology Australia, Submission 17, pp. 11-13.
  • 65
    Law Council of Australia, Submission 18, p. 20 and p. 23. See also Responsible Technology Australia, Submission 17, p. 10.
  • 66
    Law Council of Australia, Submission 18, p. 32.
  • 67
    Australia Institute, Submission 31.1, pp. 10-11.
  • 68
    Dr Mathieu O'Neil, Associate Professor of Communication, News and Media Research Centre, University of Canberra, Committee Hansard, 25 September 2020, p. 4.
  • 69
    Dr Carlo Kopp, Submission 21, pp. 8-9.
  • 70
    Mr Lachlan Colquhoun, First Assistant Secretary, Department of the Prime Minister and Cabinet, Committee Hansard, 11 December 2020, p. 14.
  • 71
    Ms Katherine Mansted, Committee Hansard, 22 June 2020, p. 17.
  • 72
    Ms Katherine Mansted, Committee Hansard, 22 June 2020, p. 17.
  • 73
    Responsible Technology Australia, Submission 17, p. 3.
  • 74
    Allens Hub for Technology, Law and Innovation; the Datafication and Automation of Human Life; and the Society on Social Implications of Technology (Allens Hub et al), Submission 19, pp. 2-3.
  • 75
    Australian Muslim Advocacy Network (AMAN), Submission 24, p. 1.
  • 76
    AMAN, Submission 24, p. 1.
  • 77
    AMAN, Submission 24, p. 1.
  • 78
    Dr Jake Wallis, ASPI, Committee Hansard, 22 June 2020, p. 13. See also Dr Jake Wallis and Mr Thomas Uren, ASPI, Submission 2, p. 1.
  • 79
    Dr Jake Wallis, ASPI, Committee Hansard, 22 June 2020, p. 13.
  • 80
    Dr Jake Wallis, ASPI, Committee Hansard, 22 June 2020, p. 13.
  • 81
    Responsible Technology Australia, Submission 17, p. 11.
  • 82
    Responsible Technology Australia, Submission 17, p. 11.
  • 83
    Dr Richard Johnson, First Assistant Secretary, Social Cohesion Division, Department of Home Affairs, Committee Hansard, 11 December 2020, p. 9 and p. 2.
  • 84
    Mr Nathaniel Gleicher, Facebook, Committee Hansard, 30 July 2021, p. 9.
  • 85
    See Sora Park, Caroline Fisher, Kieran McGuinness, Jee Young Lee, and Kerry McCallum, Digital news report: Australia 2021, June 2021, p. 10.
  • 86
    Mr Robert McKinnon, Assistant Secretary, Department of Foreign Affairs and Trade (DFAT), Committee Hansard, 11 December 2020, p. 18.
  • 87
    Professor Kerry McCallum, Director, News and Media Research Centre, University of Canberra, Committee Hansard, 25 September 2020, p. 1.
  • 88
    Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, Submission 7, pp. 3-4.
  • 89
    Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, Submission 7, pp. 3-4.
  • 90
    Ms evelyn douek, Committee Hansard, 22 June 2020, p. 5.
  • 91
    Dr Michael Jensen, Committee Hansard, 25 September 2020, p. 2.
  • 92
    Dr Michael Jensen, Committee Hansard, 25 September 2020, p. 2.
  • 93
    Professor Kerry McCallum, University of Canberra, Committee Hansard, 25 September 2020, p. 2.
  • 94
    Australia Institute, Submission 31, p. 17.
  • 95
    Dr Jake Wallis and Mr Thomas Uren, ASPI, Submission 2, p. 3.
  • 96
    Sora Park, Caroline Fisher, Kieran McGuinness, Jee Young Lee, and Kerry McCallum, Digital news report: Australia 2021, June 2021, p. 9.
  • 97
    Sora Park, Caroline Fisher, Kieran McGuinness, Jee Young Lee, and Kerry McCallum, Digital news report: Australia 2021, June 2021, p. 9.
  • 98
    Ms Melissa Harrison, Submission 5, p. 1; and the Australian Citizens Party, Submission 9, p. 1.
  • 99
    Australian Citizens Party, Submission 9, p. 1.
  • 100
    Google Australia, Submission 23, p. 4.
  • 101
    Google Australia, Submission 23, p. 4.
  • 102
    Mr Nathaniel Gleicher, Facebook, Committee Hansard, 30 July 2021, p. 4.
  • 103
    Mr Nathaniel Gleicher, Facebook, Committee Hansard, 30 July 2021, p. 4.
  • 104
    See Dr Jake Wallis and Mr Thomas Uren, ASPI, Submission 2, pp. 5-6; Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, Submission 7, p. 7; Responsible Technology Australia, Submission 17, p. 1; AMAN, Submission 24, p. 2; Principle Co, Submission 25, p. 7; Law Council of Australia, Submission 18, p. 1 and p. 30; and Ms evelyn douek, Committee Hansard, 22 June 2020, p. 2.
  • 105
    Ms evelyn douek, Committee Hansard, 22 June 2020, p. 2.
  • 106
    AMAN, Submission 24, p. 2.
  • 107
    Dr Jake Wallis and Mr Thomas Uren, ASPI, Submission 2, pp. 5-6.
  • 108
    Dr Bruce Arnold and Dr Benedict Sheehy, University of Canberra, Submission 7, p. 7.
  • 109
    Responsible Technology Australia, Submission 17, p. 1.
  • 110
    Law Council of Australia, Submission 18, p. 1.
  • 111
    Law Council of Australia, Submission 18, p. 30.
  • 112
    Facebook, Submission 27, p. 9.
  • 113
    Facebook, Submission 27, p. 17.
  • 114
    TikTok, Submission 26, p. 2.
  • 115
    Mrs Lucinda Longcroft, Google Australia, Committee Hansard, 30 July 2021, p. 14.
  • 116
    Twitter, Submission 20, p. 5.
  • 117
    Twitter, Submission 20, p. 5.
  • 118
    Twitter, Submission 20, pp. 5-6.
  • 119
    Twitter, Submission 20, p. 6.
  • 120
    Twitter, Submission 20.1, p. 7.
  • 121
    Twitter, Submission 20.1, p. 7.
  • 122
    Twitter, Submission 20, p. 7.
  • 123
    Dr Jake Wallis, ASPI, Committee Hansard, 22 June 2020, p. 10.
  • 124
    Dr Michael Jensen, Committee Hansard, 25 September 2020, pp. 2-3.
  • 125
    Mr Alex Stamos, Director, Stanford Internet Observatory, Committee Hansard, 22 June 2020, p. 7.
  • 126
    Ms Katherine Mansted, Committee Hansard, 22 June 2020, p. 16.
  • 127
    Dr Carlo Kopp, Submission 21, p. 11.
  • 128
    Dr Richard Johnson, Department of Home Affairs, Committee Hansard, 30 July 2021, p. 41. Dr Johnson further noted that the government has invested in community liaison officers, who have the capability to translate non-English language material.
  • 129
    Mr Alex Stamos, Director, Stanford Internet Observatory, Committee Hansard, 22 June 2020, p. 2.
  • 130
    Dr Jake Wallis, ASPI, Committee Hansard, 22 June 2020, p. 11.
  • 131
    Dr Richard Johnson, Department of Home Affairs, Committee Hansard, 30 July 2021, p. 40.
  • 132
    Dr Carlo Kopp, Submission 21, p. 3.
  • 133
    Dr Carlo Kopp, Submission 21, p. 5.
  • 134
    Ms evelyn douek, Committee Hansard, 22 June 2020, p. 2.
  • 135
    Ms evelyn douek, Committee Hansard, 22 June 2020, p. 2.
  • 136
    Facebook, Submission 27, p. 3 and p. 9.
  • 137
    Mr Josh Machin, Facebook, Committee Hansard, 30 July 2021, p. 4.
  • 138
    Mr Josh Machin, Facebook, Committee Hansard, 30 July 2021, p. 4.
  • 139
    Twitter, Submission 20.1, pp. 17-18.
  • 140
    Australia Institute, Submission 31, p. 2.
  • 141
    Australia Institute, Submission 31, p. 2.
  • 142
    Google Australia, Submission 23, p. 4.
  • 143
    Google Australia, Submission 23, p. 4.
  • 144
    Mr Richard Salgado, Director, Law Enforcement and Information Security, Google, Committee Hansard, 30 July 2021, p. 12.
  • 145
    Mrs Lucinda Longcroft, Google Australia, Committee Hansard, 30 July 2021, p. 12.
  • 146
    Ms Kara Hinesley, Twitter, Committee Hansard, 30 July 2021, p. 48.
  • 147
    Twitter, Submission 20, p. 9.
  • 148
    Twitter, Submission 20, p. 11.
  • 149
    Twitter, Submission 20, p. 13.
  • 150
    Twitter, Submission 20, p. 14.
  • 151
    Twitter, Submission 20, p. 14.
  • 152
    Twitter, Submission 20, p. 14.
  • 153
    Twitter, Submission 20, p. 14.
  • 154
    Twitter, Submission 20, p. 14.
  • 155
    Twitter, Submission 20, p. 14.
  • 156
    Twitter, Submission 20, p. 14.
  • 157
    Twitter, Submission 20, pp. 16-17.
  • 158
    Mrs Lucinda Longcroft, Google Australia, Committee Hansard, 30 July 2021, p. 12.
  • 159
    Mrs Lucinda Longcroft, Google Australia, Committee Hansard, 30 July 2021, p. 12.
  • 160
    TikTok, Submission 26, p. 5.
  • 161
    TikTok, Submission 26, p. 5.
  • 162
    TikTok, Submission 26, p. 5.
  • 163
    WeChat, Submission 30, p. 3.
  • 164
    WeChat, Submission 30, p. 3.
  • 165
    Facebook, Submission 27, p. 9.
  • 166
    Facebook, Submission 27, p. 10.
  • 167
    Facebook, Submission 27, pp. 10-11.
  • 168
    Facebook, Submission 27, p. 12.
  • 169
    Facebook, Submission 27, p. 12.
  • 170
    Facebook, Submission 27, p. 12.
  • 171
    Facebook, Submission 27, p. 13.
  • 172
    Facebook, Submission 27, p. 14.
  • 173
    Facebook, Submission 27, p. 15.
  • 174
    Facebook, Submission 27, p. 15.
  • 175
    Facebook, Submission 27, p. 16.
  • 176
    Special Broadcasting Service, Submission 6, p. 3.
  • 177
    Mr Lachlan Colquhoun, First Assistant Secretary, Department of the Prime Minister and Cabinet, Committee Hansard, 11 December 2020, p. 2.
  • 178
    Mr Neil Hawkins, Department of Home Affairs, Committee Hansard, 11 December 2020, p. 3.
  • 179
    Dr Richard Johnson, Department of Home Affairs, Committee Hansard, 11 December 2020, p. 3.
  • 180
    Dr Richard Johnson, Department of Home Affairs, Committee Hansard, 11 December 2020, p. 3.
  • 181
    Mr Robert McKinnon, Department of Foreign Affairs and Trade, Committee Hansard, 11 December 2020, p. 18.
  • 182
    Mr Robert McKinnon, Department of Foreign Affairs and Trade, Committee Hansard, 11 December 2020, p. 19.
  • 183
    Mr Robert McKinnon, Department of Foreign Affairs and Trade, Committee Hansard, 11 December 2020, p. 19.
  • 184
    Dr Richard Johnson, Department of Home Affairs, Committee Hansard, 30 July 2021, p. 40.
  • 185
    Dr Richard Johnson, Department of Home Affairs, Committee Hansard, 30 July 2021, p. 40.
  • 186
    Dr Richard Johnson, Department of Home Affairs, Committee Hansard, 30 July 2021, p. 40.
  • 187
    Dr Richard Johnson, Department of Home Affairs, Committee Hansard, 30 July 2021, p. 40.
  • 188
    Joint Committee on Law Enforcement, Vaccine related fraud and security risks, August 2021, p. 34.
  • 189
    Joint Committee on Law Enforcement, Vaccine related fraud and security risks, August 2021, p. 34.
  • 190
    Joint Committee on Law Enforcement, Vaccine related fraud and security risks, August 2021, p. v.
