Chapter 3 - International issues

3.1 As outlined in Chapter 1, foreign interference, through social media and other methods, is an issue affecting liberal democracies around the world. This chapter outlines some of the main countermeasures being undertaken in nations and regions which are key security partners of Australia: the European Union (EU) and the nations which—with Australia—make up the 'Five Eyes' intelligence alliance: the United States, Canada, United Kingdom and New Zealand.

3.2 Finally, the chapter outlines security risks in the Pacific region and looks at a few examples of international digital literacy campaigns.

Europe

3.3 In recent years, the EU has been increasingly contending with the issue of social media and cyber-enabled foreign interference, alongside the related threats of cyber-security, data security and privacy. As early as 2015, the European Council was publicly stressing 'the need to challenge Russia's ongoing disinformation campaigns' and called for 'an action plan on strategic communication'.[1]

European Union Agency for Cyber Security

3.4 The European Union Agency for Cyber Security (ENISA) published a report in December 2022, Foreign Information Manipulation and Interference (FIMI) and Cybersecurity—Threat Landscape. The report outlines how the concept of FIMI has been proposed as a response to the call of the European Democracy Action Plan for 'refined common definitions and methodologies in order to address different categories of disinformation and influence activities.' The report states:

Although disinformation is a prominent part of FIMI, FIMI puts emphasis on manipulative behaviour, as opposed to the truthfulness of the content being delivered … one of the main motivations behind this report is to identify ways to bring the cybersecurity and counter-FIMI communities closer together. The ambition is to provide an input to the on-going and ever pressing discussion on the nature and dynamics of information manipulation and interference, including disinformation, and on how to collectively respond to this phenomenon. The report proposes and tests an analytical approach describing FIMI and manipulation of information, as well as the underlying cybersecurity elements, by combining practices from both domains.[2]

3.5 This approach is now replicated across many EU agencies, which tend to define interference activities based on outcomes sought rather than tactics being used: focus is directed to the overall goals of the interference—manipulation of the information environment—as opposed to whether the information itself is misinformation or disinformation.

European Parliament

Special committee on Foreign Interference

3.6 The first Special Committee on Foreign Interference in all Democratic Processes in the European Union, including Disinformation (INGE 1), was established in 2020. INGE 1 was tasked with reporting on factual findings and recommendations concerning the measures and initiatives to be taken in countering foreign interference and disinformation.

3.7 After eighteen months of work—including 50 hearings with over 130 invitees—the committee's report identified and mapped the threat of foreign interference in all its forms, diagnosed the EU's vulnerabilities, and made recommendations for strengthening the EU's resilience.[3]

3.8 INGE 2 was established in 2022 with a revised mandate to follow up on the implementation of the INGE 1 report, and to engage in dialogue with policy makers at the national, European and international levels to contribute to overall institutional resilience against foreign interference, hybrid threats and disinformation in the run-up to the European elections in 2024.

3.9 Since May 2022, INGE 2 has spoken with some two dozen experts and policy makers. In order to best focus on institutional and legislative resilience building in the run-up to the European elections in 2024, INGE 2 established close cooperation with the North Atlantic Treaty Organization StratCom in Riga (Latvia), the Hybrid Centre of Excellence in Helsinki (Finland), the Australian Government, and relevant bodies at the United Nations in New York.

3.10 The INGE 2 report included recommendations and updates on the EU's coordinated strategy against foreign interference; on EU resilience building; on interference using online platforms; on the critical infrastructure and strategic sectors; on interference during electoral processes; on covert funding of political activities by foreign actors and donors; on cybersecurity and resilience of democratic processes; on the impact of interference on the rights of minorities and other vulnerable groups; on deterrence, attribution and collective countermeasures, including sanctions; and on neighbourhood policy, global cooperation, and multilateralism.[4]

EU Digital Services Act

3.11 Passed in 2022 and in force from 1 January 2024, the Digital Services Act updates and harmonises the legal framework for regulating illegal content – including disinformation – on digital intermediaries across the EU, and co-regulates platforms along with the 2022 Code of Practice on Disinformation (EU code). The Digital Services Act requires all online platforms with more than 45 million European users (known as Very Large Online Platforms, or VLOPs) to have measures in place to 'mitigate risks from the spread of illegal content', with 'different purposes and enforcement remedies [through the EU code], depending on whether an organisation is a VLOP or another category of organisation'.[5]

European Commission—tackling online disinformation

3.12 As part of its European Democracy Action Plan and complementary to the General Data Protection Regulation, the European Commission (EC) issued a Communication on Tackling online disinformation: a European approach in 2018.[6] As part of this approach, the EC committed to a range of initiatives to address disinformation across the EU, including through making online platforms more accountable, such as:

recognition of the voluntary EU code, strengthened further in 2022 by bringing misinformation into the code's scope and adding measures on demonetising pedlars of disinformation, transparency of political advertising, reducing manipulative behaviour, better informing users, expanding fact-checking coverage, and improving transparency and reporting;

developing the Action Plan against Disinformation[7]; and

establishing the European Digital Media Observatory, a network of independent fact-checkers and independent researchers.

3.13 In February 2023 a Transparency Centre was announced, including baseline reports on actions taken by major online platforms in response to disinformation.[8] In relation to the baseline reports, the EC noted that:

Most major online platforms (Google, Meta, TikTok and Microsoft) demonstrated strong commitment to the reporting, providing an unprecedented level of detail about the implementation of their commitments under the Code, and—for the first time—data at Member State level. Twitter, however, provides little specific information and no targeted data in relation to its commitments.[9]

3.14 Reports from online platforms referenced foreign interference matters, including in relation to elections, the 'war of aggression by Russia on Ukraine', and Commitment 16 relating to cross-platform influence operations and foreign interference.[10]

European External Action Service

3.15 The European External Action Service (EEAS) is the EU diplomatic service. In 2015, the EEAS established its own disinformation unit—East StratCom Task Force—following the calls outlined above for greater action against disinformation. Since then, a number of reports and initiatives have been undertaken, as outlined below.

EUvsDISINFO

3.16 In 2015, the East StratCom Task Force launched EUvsDisinfo, a project 'to increase public awareness and understanding of the Kremlin's disinformation operations, and to help citizens in Europe and beyond develop resistance to digital information and media manipulation'. Using 'data analysis and media monitoring services in 15 languages, EUvsDisinfo identifies, compiles, and exposes disinformation cases originating in pro-Kremlin media that are spread across the EU and Eastern Partnership countries'.[11]

EEAS Report on Foreign Information Manipulation and Interference Threats

3.17 In June 2022, the Carnegie Endowment for International Peace convened a workshop of high-level experts from civil society, industry, and government to take stock of best practices in the FIMI analyst community. That workshop identified a key barrier to addressing FIMI: the lack of agreed-upon definitions and analytical standards for analysing and reporting on FIMI.[12]

3.18 Following on from that workshop, the EEAS released the 1st EEAS Report on Foreign Information Manipulation and Interference Threats (EEAS report) in February 2023 to:

… provide the FIMI defender community with a proof-of-concept for a common framework that enables mutual sharing of complex insights in a timely fashion and at scale. This is done to create a common understanding and formulate a collective, systematic response to FIMI.[13]

3.19 The EEAS report had three main components:

a pilot FIMI threat analysis on priority actors and issues in 2022;

a behaviour-first approach to FIMI detection and analysis, as well as the linked DISARM 'Kill Chain' perspective on FIMI; and

an analytical framework for FIMI threat analysis.[14]

3.20 The EEAS report proposed using the Disinformation Analysis and Risk Management (DISARM) open-source framework, which is:

… designed for describing and understanding the behavioural parts of FIMI/disinformation. It sets out best practices for fighting disinformation through sharing data & analysis, and can inform effective action. The Framework has been developed, drawing on global cybersecurity best practices.[15]

3.21 The DISARM framework builds on the ABCDE framework of differentiating FIMI incidents 'in terms of actors, behaviours, content, degree, and effect', which is seen as a 'helpful mnemonic for both investigators and readers to check whether an analysis covers every important aspect of a FIMI incident'. The DISARM framework goes further, articulating an analysis cycle that is self-reinforcing, in that 'the more often the analytical workflow is applied, the better informed each research iteration will become':

Figure 3.1 Self-reinforcing workflow for strategically analysing incidents of Foreign Information Manipulation and Interference

Source: European Union External Action Service, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 27.

3.22 The EEAS report noted the importance of taking a systematic approach to developing countermeasures, as well as then measuring the success of different strategies.[16]

3.23 The EEAS report also emphasised the importance of ensuring countermeasures are appropriate to the seriousness of the particular threat, noting that the point of countering FIMI is to protect the integrity of the democratic process and universal values. For example, where anonymity is a key tactic of a FIMI actor, government responses must 'weigh whether challenging that central feature of the internet is proportionate to tackling the risk'.[17]

3.24 The EEAS report identified a range of countermeasures that can be taken to address FIMI:

Statement of Refutal: An involved entity issued a statement refuting the claims of the incident.

Debunking: The claims of the incident were debunked/fact-checked.

Content Deleted: Content of any type was taken down in response to the incident.

Content Confined: Content of any type was limited in response to the incident.

Channel Limited or Suspended: The channel of any of the observables was limited or suspended in response to the incident.

Other: Any other counter measure that is not captured by the above taxonomy.[18]

3.25 The EEAS report highlighted an additional tactic, which is to use a 'Kill Chain' approach as modified by the DISARM Foundation for use against FIMI. This approach recognises that a single FIMI incident has many necessary steps—planning, creating and disseminating content—and that denying a threat actor any one of those steps effectively kills off the attack:

This approach builds upon positive experience in cybersecurity, where the forensic analysis of threat actor behaviour throughout the entire timeline of their attempted attack has helped to better understand systemic vulnerabilities, and how to spot and close their exploitation to prevent the infiltration of and damage to computer systems.[19]

3.26 Most importantly, the Kill Chain approach acknowledges that each step of the FIMI chain requires not only different approaches to detection and analysis, but also different countering strategies and techniques.[20]

3.27 Figure 3.2 below shows the four identified stages of a FIMI incident, each of which then allows for a different strategy and techniques to disable that stage and 'kill' the entire incident:

Figure 3.2 DISARM Foundation—Kill Chain

Source: EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 29.

Visualisation of the DISARM framework's threat actor Kill Chain (Red Team). Red dots represent the overarching tactics (TA) at a given stage, blue dots show examples of techniques (T) used under a given tactic.

3.28 The EEAS report noted that in order for a Kill Chain approach to be helpful, it must be supported by objective analysis of the behaviour and tactics, techniques and procedures (TTPs) threat actors use, as well as systematic development and measurement of disruptive responses.[21]
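The denial logic at the heart of the Kill Chain approach can be sketched in a few lines of code. The following is an illustrative model only, not official DISARM tooling: the stage names follow the four DISARM phases, while the function and variable names are invented for the example.

```python
# Illustrative model of the Kill Chain idea: a FIMI incident must pass through
# every stage in order, so denying the threat actor any single stage disrupts
# the whole incident. This is a hypothetical sketch, not DISARM software.

KILL_CHAIN_STAGES = ("plan", "prepare", "execute", "assess")

def incident_disrupted(denied_stages: set) -> bool:
    """The incident is 'killed' if at least one of its stages has been denied."""
    return any(stage in denied_stages for stage in KILL_CHAIN_STAGES)

# Example: suspending the accounts used for dissemination denies the
# 'execute' stage, which is enough to break the chain.
print(incident_disrupted({"execute"}))  # True
print(incident_disrupted(set()))        # False
```

The same structure extends naturally to per-stage countermeasures: each phase can be mapped to the detection and disruption techniques appropriate to it, reflecting the point in paragraph 3.26 that each step calls for different countering strategies.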

3.29 The EU Agency for Cyber Security noted that having a community of multiple FIMI countermeasure actors from different sectors enables any response to be tailored by choosing the best political actor to respond to a specific FIMI event.[22]

3.30 The EEAS report also found that large-scale collaboration required a 'common data format for sharing threat information' that was 'able to represent all fundamental building blocks (or objects) of a threat and express how they are related to one another'. The report further noted of FIMI data standards:

It would also have to be flexible enough to remain useful while the threat keeps evolving, but stable enough to allow for the adaptation of processes and the building of tools on top of it. Lastly, it should follow the same principles of openness, community involvement and universality as commonly shared taxonomies to find wide adoption.[23]

3.31 The report recommended the adoption of the existing Structured Threat Information Expression (STIX™) format used for cyber threat information, and suggested it be updated to STIX 2.1 with custom extensions needed for idiosyncratic FIMI threat indicators.[24]

3.32 Notably, on 31 May 2023, the EU-United States (US) Trade and Technology Council announced it had adopted the DISARM and STIX2 frameworks as 'a common standard for exchanging structured threat information on FIMI':

This standard that the European Union and the United States are now using to analyse FIMI and share information is comprised of the DISARM framework, the STIX2 standard and the OpenCTI [open cyber threat intelligence] platform. This approach will significantly strengthen our collective efforts to identify, analyse and counter FIMI by enhancing our common situational awareness of FIMI threats. At the same time, this standard and its elements are made up of open-source solutions, which is key to ensure an approach that can be used by stakeholders around the globe.[25]
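STIX is a JSON-based format for structured threat information. To make the recommendation concrete, the following hand-written sketch follows the general shape of a STIX 2.1 indicator object; the identifier, pattern, name and labels are invented for illustration and do not come from the EEAS report.

```python
import json

# Hand-written illustration of a STIX 2.1 indicator object of the kind that
# could carry FIMI threat information. All field values here are invented.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000001",
    "created": "2023-02-01T00:00:00.000Z",
    "modified": "2023-02-01T00:00:00.000Z",
    "name": "Inauthentic account cluster pushing a false narrative",
    "pattern": "[url:value = 'https://example.com/fake-story']",
    "pattern_type": "stix",
    "valid_from": "2023-02-01T00:00:00.000Z",
    "labels": ["fimi", "coordinated-inauthentic-behaviour"],
}

# Serialising to JSON is what makes the object machine-readable and shareable.
payload = json.dumps(indicator, indent=2)
```

Because the format is plain JSON, any participating organisation can parse, enrich and re-share such objects; the custom FIMI extensions the report envisages would be carried as additional properties under the STIX extension mechanism.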

Information Sharing and Analysis Center

3.33 On 7 February 2023, the EU announced the launch of a new platform to counter disinformation campaigns by Russia and China. The Information Sharing and Analysis Center within the EEAS will track information manipulation by foreign actors and coordinate with the 27 EU countries and the wider community of non-government organisations (NGOs). It will be a decentralised platform to exchange information in real-time with NGOs, countries and cybersecurity agencies, enabling better understanding of emerging disinformation threats and narratives and quicker action to tackle such problems.[26]

TikTok in the EU

TikTok bans

3.34 The European Parliament, the EC and the EU Council, the three top EU bodies, have all banned TikTok on staff devices, citing cybersecurity concerns. Individual European nations that have imposed similar bans, as of 4 April 2023, include France, the Netherlands, Norway and Denmark.[27]

Project Clover

3.35 As part of its efforts to allay fears about misuse of user data and privacy concerns, TikTok has begun what it refers to as Project Clover, which the company said in an 8 March 2023 statement builds on its data approach in the US, writing:

… we are further enhancing these controls by introducing security gateways that will determine employee access to European TikTok user data and data transfers outside of Europe. This will add another level of control over data access. Any data access will not only comply with the relevant data protection laws but also have to first go through these security gateways and additional checks.[28]

3.36 TikTok also reported that it was working on additional privacy enhancing technologies that included 'pseudonymisation of personal data so that an individual cannot be identified without additional information and aggregation of individual data points into large data sets to protect the privacy of individuals'.
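Pseudonymisation of this kind is typically implemented with a keyed hash, so that records remain linkable for aggregation while the underlying identifier cannot be recovered without a separately held secret. The following generic sketch illustrates the technique; it is not TikTok's actual implementation, and the key and user identifier are invented.

```python
import hashlib
import hmac

# Generic sketch of pseudonymisation via a keyed hash (HMAC-SHA256). This
# illustrates the technique only; the key and identifiers below are invented.
SECRET_KEY = b"held-separately-by-the-data-controller"

def pseudonymise(user_id: str) -> str:
    """Replace an identifier with a stable pseudonym.

    The mapping is deterministic (so records about the same person can be
    aggregated) but cannot be reversed without the separately held key.
    """
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"user": pseudonymise("user-12345"), "watch_time_s": 311}
# The same input always yields the same pseudonym, so individual data points
# can be grouped into large data sets without exposing who the individual is.
assert record["user"] == pseudonymise("user-12345")
```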

3.37 TikTok also announced a number of data centre sites in Europe as part of what it said was its 'commitment to store European TikTok user data locally', outlining that it would begin storing European user data locally in 2023, with migration continuing into 2024, at an overall cost of €1.2 billion.[29]

Agreement between EU and United States on transatlantic cooperation

3.38 In May 2023, the EU and the US announced a number of actions to increase transatlantic cooperation to proactively address FIMI and disinformation:

Common methodology for identifying, analysing and countering FIMI

The EU and the US have adopted a common standard for exchanging structured threat information on FIMI, through a more interoperable and machine-readable approach. When fully operational, information will be shared more efficiently, effectively and with a greater level of detail when it comes to understanding the manipulative tactics, techniques and procedures. This standard that the EU and the US are now using to analyse FIMI and share information is comprised of the DISARM framework, the STIX2 standard and the OpenCTI platform.

Enhancing the preparedness against FIMI in third countries together with Civil Society Organisations (CSOs) and platforms

The EU and the US organised several workshops to bring together civil society organisations, academic institutions, and media outlets from Africa and Latin-America, as well as platforms active in these regions, to explore how a multi-stakeholder community can step up its actions in coordinating the response to FIMI.

The EU and the US intend to further enhance support for capacity building in third countries, including by exploring additional actions to support and reinforce civil society and fact-checking organisations that facilitate the fight against FIMI on online platforms through our respective development funding mechanisms.

Call for action to platforms

The EU and the US also called upon online platforms to ensure the integrity of their services and to effectively respond to disinformation and FIMI, building on the example of the EU's updated Code of Practice on Disinformation.[30]

United States

Foreign Malign Influence Centre

3.39 The Foreign Malign Influence Centre was established in September 2022. It is housed under the Director of National Intelligence and 'serves as the primary U.S. Government organization for analysing and integrating all intelligence and other reporting possessed or acquired pertaining to foreign malign influence, including election security'.[31]

Department of State—Global Engagement Centre

3.40 The mission of the Global Engagement Centre of the Department of State is to 'direct, lead, synchronize, integrate, and coordinate U.S. Federal Government efforts to recognize, understand, expose, and counter foreign state and non-state propaganda and disinformation efforts aimed at undermining or influencing the policies, security, or stability of the United States, its allies, and partner nations'.

3.41 It does this through five areas of work:

(a) Analytics and Research: collects data to produce analysis on foreign malign information influence narratives, tactics, and techniques, which is then shared with domestic and international partners.

(b) International Partnerships: works with other national governments to coordinate counter-disinformation analyses and actions.

(c) Programs and Campaigns: has issue-specific teams to build societal and institutional resilience to foreign propaganda and disinformation efforts.

(d) Exposure: has a coordination role in the exposure of foreign information influence operations.

(e) Technology Assessment and Engagement: assesses counter-disinformation technologies against specific challenges, and identifies technological solutions through technology challenge programs.[32]

TikTok & WeChat IEEPA Orders

3.42 In August 2020, President Trump attempted to use powers under the International Emergency Economic Powers Act to ban TikTok and WeChat in the US, or to force ByteDance's divestment of TikTok to a non-Chinese company. A court case struck down the Executive Orders, which were subsequently withdrawn by President Biden.[33]

RESTRICT ACT

3.43 In response to the executive orders being struck down by legal challenges, on 7 March 2023 a bipartisan group of 13 US Senators introduced a bill that would create the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which has been referred to a US Senate Committee for inquiry. The RESTRICT Bill would create an entirely new legislative framework allowing the Secretary of Commerce to review and regulate foreign-linked tech firms for national security risks and then develop mitigation options.[34]

TikTok ban

3.44 In late December 2022, the US government approved a ban on the use of TikTok on government-owned devices. Reporting from February 2023 indicates TikTok has been banned from state government-owned devices in half of the states, and a number of US universities are also reported to have banned the use of TikTok on their networks.[35]

Project Texas

3.45 Project Texas is similar to Project Clover, in that the US$1.5 billion project with technology company Oracle is designed to ensure that TikTok US user data remains stored in the US. However, whistleblowers have raised concerns that this project will not genuinely protect data from being accessed by the Chinese Communist Party (CCP).[36]

United Kingdom

Defending Democracy Taskforce

3.46 In November 2022, the United Kingdom (UK) established the Defending Democracy Taskforce, with its primary focus being 'to protect the democratic integrity of the UK from threats of foreign interference'. Those threats were listed as:

… foreign interference in our elections and electoral processes; disinformation; physical and cyber threats to our democratic institutions and those who represent them; foreign interference in public office, political parties and universities; and transnational repression in the UK.[37]

3.47 The taskforce reports into the UK's National Security Council.[38]

Counter Disinformation Unit

3.48 The UK has a dedicated Counter Disinformation Unit that operates across government. Established in early 2020, its primary role at that time was to counter false COVID-19 narratives. However, it is now focused more widely on content targeted at UK audiences which poses a risk to public health, public safety, or national security, such as false COVID-19 'cures' or disinformation related to the Russian invasion of Ukraine.[39]

UK's TikTok ban

3.49 On 16 March 2023, TikTok was banned from being installed on government devices, after Cabinet Office Ministers ordered a security review. The statement announcing the ban highlighted that:

TikTok requires users to give permission for the app to access data stored on the device, which is then collected and stored by the company. Allowing such permissions gives the company access to a range of data on the device, including contacts, user content, and geolocation data.

The government, along with our international partners, is concerned about the way in which this data may be used.[40]

National Security Act 2023

3.50 The National Security Act 2023 came into effect on 11 July 2023 and 'introduces new measures to modernise counter-espionage laws and address the evolving state threat to national security'. The Home Office (UK) stated:

The act introduces an offence of foreign interference, meaning it will now be illegal in the UK to engage in conduct that interferes with fundamental rights, such as voting and freedom of speech, that are essential to the UK's democracy.

3.51 The act also introduces a Foreign Influence Registration Scheme, similar to that operating in Australia.[41]

Online Safety Bill

3.52 The Online Safety Bill, which as of July 2023 is before the House of Lords, integrates closely with the National Security Act 2023 described above. It does this by designating foreign interference offences under the National Security Act as 'priority offences' under the Online Safety Bill. The UK Government described the effect of this amendment to the Online Safety Bill on 5 July 2022:

It means social media platforms, search engines and other apps and websites allowing people to post their own content will have a legal duty to take proactive, preventative action to identify and minimise people's exposure to state-sponsored or state-linked disinformation aimed at interfering with the UK.[42]

Canada

3.53 Electoral foreign interference, hybrid threats and cyber-enabled social media operations, including the use of cyber-attacks, have also been part of Chinese foreign interference techniques recently identified in Canada. This follows the leak in early 2023 of government intelligence documents by a whistleblower to the Globe and Mail newspaper, which reported:

Highly classified CSIS [Canadian Security Intelligence Service] documents seen by The Globe paint a picture of a broad Chinese strategy to interfere in Canada's democracy and gain influence over politicians, corporate executives, academics and vulnerable Chinese Canadians. The documents reveal that Beijing's ruling Chinese Communist Party uses three colour-coded 'political-interference tactics' to gain influence over Canadians here and those travelling to China.[43]

3.54 The CSIS Public Report 2021 also identifies the role cyber-attacks are playing in foreign interference campaigns, including on social media, providing an infographic and outlining that:

State-sponsored disinformation campaigns represent one of many vectors of foreign interference and hostile states have been involved in actively spreading disinformation in an effort to discredit our government institutions, negatively impact social cohesion and gain influence for their own strategic objectives.[44]

Foreign Influence Registry

3.55 Canada has recently begun public consultation on the creation of a foreign influence registry, and Australia's Foreign Influence Transparency Scheme has been included in the benchmarking that accompanied the consultation document.[45] Canada also appears to be following a similar approach to Australia in considering the appointment of a National Counter Foreign Interference Coordinator within Public Safety Canada.[46]

TikTok ban

3.56 Following a review of TikTok by the Chief Information Officer of Canada, in February 2023 the Canadian Government banned TikTok on government-issued devices. As has been the case in other jurisdictions, the public messaging about the ban has been framed heavily around the cyber security and privacy threat the app poses rather than foreign interference considerations.[47]

New Zealand

Concern for 2023 elections

3.57 On 27 March 2023, media reported that 'New Zealand intelligence agencies are growing more concerned about both foreign interference and malicious cyber activity ahead of elections in October'. Phil McKee, Acting Director-General of Security, said the threat of foreign interference and espionage was a growing cause for concern, adding that information was being collected on those who speak out against foreign governments, and that their families were being threatened in their home countries.[48]

3.58 On 10 March 2023, New Zealand media reported comments by a member of the government's Independent Electoral Review Panel indicating that the potential for artificial intelligence to be used to spread misinformation and disinformation in the lead-up to the October 2023 election was a topic under consideration.[49]

3.59 Earlier, in July 2021, the New Zealand Director-General of Security told the Justice Committee Inquiry into the 2020 General Election and Referendums that 'state actors pursuing their strategic goals through disinformation is an area of growing international concern', and went on to note that:

We assess that New Zealand has not been the direct target of widespread state-backed disinformation campaigns. But given the nature of global online content, members of the New Zealand public are highly likely to encounter disinformation. This means that disinformation campaigns occurring overseas may affect levels of trust in the media and government here.[50]

TikTok ban on parliamentary devices

3.60 On 17 March 2023, the parliamentary service chief executive, Rafael Gonzalez-Montero, wrote to New Zealand MPs that the TikTok application would be banned from parliamentary work devices after determining that the 'risks are not acceptable in the current New Zealand Parliament environment'. This decision was based on the parliamentary service's analysis and discussions with colleagues across government and internationally.[51]

Pacific region

3.61 A recent report by the Australian Strategic Policy Institute noted that the Indo-Pacific region 'contains an arc of emergent democracies vulnerable to coercive statecraft and foreign interference', and further noted that some 'have limited civil society organisations or independent media'. The report recommended:

The US and Australian governments should invest more in fostering democratic resilience in a region that has high levels of digital penetration. Many countries in the Indo-Pacific are coming from a low base in terms of their capacity to engage with hybrid threats in the security landscape ... There's much that the US and Australia can do to build capacity and resilience among partners in the region. A construct such as a regional hybrid threats centre could provide an effective vehicle to bring together this cross-section of stakeholders—from governments, industry and civil society—to engage with the diversity of the region and the diversity of threats.[52]

3.62 Dr William Stoltz of the Australian National University also noted that work needs to be done by Australia to support countermeasures against foreign interference in the Pacific region:

… there is arguably a greater problem concerning foreign interference on social media in the countries that are closest to us and most key to our interests. This is particularly those countries in the South Pacific, where there isn't necessarily the fulsome law enforcement and digital infrastructure that we might have to provide online safety. I would say that a more urgent priority for Australia, alongside our own jurisdiction, is to be investing in the digital literacy and the infrastructure of developing countries in our region, which we know are the targets of quite comprehensive information operations by other states. Those countries, just through the underdeveloped nature of their institutions, are more vulnerable to these malicious information operations.[53]

3.63 Meta advised the committee of work it is undertaking in the Pacific region, via its Pacific Islands advisory group:

[W]e've been able to develop a locally relevant digital campaign which advises and educates people across a number of different channels. It's done in a range of local languages and in ways that, hopefully, are compelling to people, to give them some tips, and in ways to provoke them to think about what they're sharing and the content they're engaging with. We've also got teams that work across the region on broader digital literacy campaigns as well.[54]

International digital literacy campaigns

3.64 This section outlines key digital literacy campaigns that have taken place abroad, many of which seek to mitigate the impact of foreign interference, misinformation and disinformation online. These campaigns could act as models for a prospective Australian public education campaign.

United States

3.65 The Federal Bureau of Investigation's Protected Voices Initiative provides information to the public on the nature of foreign cyberattacks aimed at influencing American political processes. The Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) has also published a range of infographics and documents aimed at raising awareness and educating individuals on how to resist foreign interference through social media.[55]

3.66 In anticipation of the 2020 Presidential election, CISA launched the '#Protect2020' campaign. The campaign involved support for election officials in identifying and planning for vulnerabilities within election infrastructure.[56]

United Kingdom

3.67 The UK Government has developed tools and campaigns to counter disinformation, including:

The 'Don't Feed the Beast' campaign educates the public on the dangers of sharing inaccurate information online.

A 'train the trainer' plan which equips teachers, library workers and carers with the skills needed to assess information viewed online.

A toolkit to help communicators manage the impact disinformation could have on their organisation.[57]

European Union

3.68 The European Commission has organised a range of initiatives to increase levels of digital and media literacy, such as the 2019 European Media Literacy Week, which included 320 events across the continent, and the Social Observatory for Disinformation and Social Media Analysis, which aims to 'provide strategies and actions to increase media literacy'.[58]

3.69 As outlined earlier in this chapter, the EUvsDisinfo campaign aims to increase public awareness of 'disinformation operations and campaigns and increase resistance to digital information and media manipulation'.[59]

Canada

3.70 The Canadian Government has developed the Digital Citizen Initiative to 'support democracy and social cohesion in Canada by building citizens' resilience against online disinformation and building partnerships to support a healthy information ecosystem'.[60]

Ukraine

3.71 In 2015, the Ukrainian Government introduced the Learn to Discern program in schools to improve young people's media literacy, particularly in the context of propaganda campaigns led by the Russian Government which sought to destabilise Ukrainian public institutions.[61]

3.72 The success of the program has led to its adoption in other countries, and some submitters recommended that Australia adopt a similar program here.[62]

Footnotes

[1] European Council conclusions, 20 March 2015, EUCO 11/15, p. 5.

[2] European Union Agency for Cyber Security, Foreign Information and Interference (FIMI) and Cybersecurity—Threat Landscape, December 2022, p. 4.

[3] Special Committee on foreign interference in all democratic processes in the European Union, including disinformation (INGE 1), REPORT on foreign interference in all democratic processes in the European Union, including disinformation, 2020/2268(INI).

[4] Special Committee on foreign interference in all democratic processes in the European Union, including disinformation (INGE 2), REPORT on foreign interference in all democratic processes in the European Union, including disinformation, 2022/2075(INI).

[5] Australian Communications and Media Authority, Submission 6, p. 3; Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Submission 7, p. 3; European Commission, The Digital Services Act package, https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package (accessed 2 March 2023).

[6] European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Tackling online disinformation: a European approach, 24 April 2018, pp. 1–16.

[7] European Commission, Action Plan against Disinformation, 5 December 2018.

[8] Australian Communications and Media Authority, Submission 6, p. 3; European Commission, Tackling online disinformation, https://digital-strategy.ec.europa.eu/en/policies/online-disinformation (accessed 1 March 2023).

[9] European Commission, Signatories of the Code of Practice on Disinformation deliver their first baseline reports in the Transparency Centre, https://digital-strategy.ec.europa.eu/en/news/signatories-code-practice-disinformation-deliver-their-first-baseline-reports-transparency-centre (accessed 1 March 2023).

[11] EUvsDisinfo, About, https://euvsdisinfo.eu/about/ (accessed 7 July 2023).

[12] Carnegie Endowment for International Peace, Partnership for Countering Influence Operations, https://carnegieendowment.org/specialprojects/counteringinfluenceoperations/ (accessed 15 May 2023).

[13] European External Action Service (EEAS), 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 5.

[14] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 7.

[15] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 4.

[16] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 26.

[17] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 26.

[18] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 24.

[19] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 26.

[20] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 26.

[21] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 26.

[22] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 26.

[23] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 30.

[24] EEAS, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 30.

[26] Clothilde Goujard, 'EU to launch platform to fight Russian, Chinese disinformation', Politico, 7 February 2023.

[28] TikTok, Setting a new standard in European data security with Project Clover, 8 March 2023, https://newsroom.tiktok.com/en-ie/project-clover-ireland (accessed 16 July 2023).

[29] TikTok, Setting a new standard in European data security with Project Clover.

[30] US-EU Joint Statement of the Trade and Technology Council, Annex 3, 29 May 2023.

[31] Director of National Intelligence (US), The National Counterterrorism Center, Foreign Malign Influence Center Fact Sheet, www.dni.gov/files/FMIC/documents/FMIC_Fact_Sheet.pdf (accessed 16 July 2023).

[32] US Department of State, About Us – Global Engagement Center, www.state.gov/about-us-global-engagement-center-2/ (accessed 16 July 2023).

[33] Peter Jeydel and Brian Egan, 'An "IEEPA-Free Zone" for TikTok and other Chinese Mobile Applications?', American Society of International Law, 8 February 2021, Vol. 25 Issue 2.

[34] See: Cyber Risk GmbH, The Restrict Act, https://www.restrict-act.com/ (accessed 26 July 2023).

[35] Johana Bhuiyan, 'Why did the US just ban TikTok from government-issued cellphones?', The Guardian, 31 December 2022; Alex Hern, 'Canada bans TikTok on government devices over security risks', The Guardian, 1 March 2023.

[37] UK Home Office, 'Ministerial Taskforce meets to tackle state threats to UK democracy', Media release, 28 November 2022.

[38] UK Government, 'Ministerial Taskforce meets to tackle state threats to UK democracy', Media release, 28 November 2022.

[39] Cabinet Office (UK), Fact Sheet on the CDU and RRU [Rapid Response Unit], 9 June 2023, www.gov.uk/government/news/fact-sheet-on-the-cdu-and-rru (accessed 15 July 2023).

[40] Cabinet Office (UK), 'TikTok banned on UK government devices as part of wider app review', Media release, 16 March 2023.

[41] Home Office (UK), National Security Act 2023, www.gov.uk/government/collections/the-national-security-bill (accessed 15 July 2023).

[42] Department for Digital, Culture, Media and Sport (UK), 'Internet safety laws strengthened to fight Russian and hostile state disinformation', Press release, 5 July 2022; UK Parliament, Online Safety Bill, https://bills.parliament.uk/bills/3137 (accessed 26 July 2023).

[43] Steven Chase, 'A timeline of China's alleged interference in recent Canadian elections', The Globe and Mail, 9 March 2023 (accessed 26 July 2023).

[44] Canadian Security Intelligence Service, CSIS Public Report 2021, March 2022, p. 18.

[45] Public Safety Canada, Enhancing Foreign Influence Transparency: Exploring Measures to Strengthen Canada's Approach, www.publicsafety.gc.ca/cnt/rsrcs/pblctns/2023-nhncng-frgn-nfluence/index-en.aspx (accessed 7 July 2023).

[46] 'Ottawa considering appointing a national coordinator to counter foreign interference, minister says', CBC News (accessed 7 July 2023).

[50] Director-General of Security, New Zealand Security Intelligence Service and Acting Director-General of the Government Communications Security Bureau, Director-General remarks: Justice Committee Inquiry into the 2020 General Election and Referendums, www.nzsis.govt.nz/news/director-general-remarks-justice-committee-inquiry-into-the-2020-general-election-and-referendums/ (accessed 7 July 2023).

[52] Danielle Cave and Dr Jacob Wallis, Australian Strategic Policy Institute, 'Cyber-enabled foreign interference', Strategic Insights, November 2022, pp. 21–22.

[53] Dr William Stoltz, Private capacity, Committee Hansard, 20 April 2023, p. 40.

[54] Ms Mia Garlick, Regional Director of Policy, Meta, Committee Hansard, 11 July 2023, p. 9.

[56] Law Council of Australia, Submission 18, p. 35.

[57] Government of the United Kingdom, SHARE Checklist, 2020, https://sharechecklist.gov.uk/ (accessed 3 March 2022); Australasian Cyber Law Institute, Submission 41, p. 20.

[58] Law Council of Australia, Submission 18, p. 48.

[59] Law Council of Australia, Submission 18, p. 46.

[60] Department of Home Affairs, Submission 16, p. 8.

[61] Australasian Cyber Law Institute, Submission 41, p. 23.

[62] See Law Society of New South Wales: Young Lawyers, Submission 11, p. 10; and University of New South Wales Law Society, Submission 37, p. 28.