Chapter 2 - Foreign interference in Australia

2.1 Just as foreign interference is a key security concern of nations elsewhere, it is a key concern in Australia. The Department of Home Affairs (Home Affairs) submitted that:

Foreign interference and espionage is the principal security concern facing Australia. Foreign interference and espionage threaten the things that we value most about our country: our social cohesion, our trusted democracy, and our freedoms. Australia remains a target of sophisticated and persistent foreign interference activities directed towards government, academia, industry, the media and our communities. The nature and scale of the threat we face is complex, and undermines our sovereignty, values and national interests.[1]

2.2 The following sections will outline the four key areas of foreign interference that are of concern in Australia, two related to the pull of information and two related to the push of information. The fears are that social media platforms and applications (apps) can be used to:

gather intelligence on individuals that can be used to target them;

gather behavioural data by population or cohort that can be used to improve interference and influence campaigns;

harass Australia's diaspora communities; and

spread misinformation or disinformation with the intent to:

achieve a specific outcome within Australia's political system;

manipulate community discourse and understanding about an issue; or

spread chaos and seed distrust in Australia's democratic systems.

2.3 Chapter 1 outlined that foreign interference via social media can be part of a wider campaign of interference. This has also been experienced in Australia. For example, Mrs Kateryna Argyrou, Co-Chair of the Australian Federation of Ukrainian Organisations, described the Russian interference tactics used against the Ukrainian diaspora community in Australia as constantly evolving and coming from all angles, with social media used to support real-world interference. Mrs Argyrou described the formation of support groups ostensibly established to support Ukrainians, but which were in fact set up by people of Russian background who used the groups to gather data on Ukrainians residing in Australia.[2]

2.4 Badiucao, a Chinese-born Australian artist who is a well-known dissident against the Chinese Communist Party (CCP), agreed that social media-based interference is often just one of the tactics in a wider interference campaign. He told the committee:

Recently, I also find they are being bolder and aggressive. In February this year, I actually got approached by a person who claimed to be a journalist of Reuters. Eventually, I found out that this is not the person he claimed to be. When I verified with real Reuters journalists, we found out this is not the person. It turned out there are three different persons' accounts from Telegram channelled into Twitter, pretending to be the entire China bureau of Reuters. They are using this fake identity to trick dissidents and the Chinese diaspora outside of China, including inside China, to send sensitive information about human rights campaigns and other political opinion.[3]

Gathering intelligence on individuals

2.5 Prior to the prevalence of social media, human intelligence (HUMINT) was painstakingly gathered by 'travel, articles, public events, and old fashioned boots on the ground surveillance'. However, social media has now become the primary source of HUMINT, given the numbers of people posting intimate details of their public and professional lives:

59 per cent of people post names or photos of children.

27 per cent of people post names or photos of their partner.

93 per cent of people post employment updates.

36 per cent of people post information about their company, job, boss, or colleagues.

32 per cent of people post updates and photos during business trips.

Around 55 per cent of people do not have privacy settings activated on their social media.[4]

2.6 This issue of concern is outlined in greater detail later in this chapter, in the discussion of interference against specific members of diaspora communities in Australia; concerns related to the professional networking site LinkedIn are discussed in Chapter 6.

Gathering behavioural data

2.7 A key concern with the mass gathering of behavioural data via apps such as TikTok is that it enables microtargeting of advertising and influence campaigns, which can then be used to engage in foreign interference.

2.8 Internet 2.0 submitted that big data is one of China's stated primary fields for strategic competition, and that China sees big data as 'a new opportunity to reshape its competitive advantage'. Internet 2.0 further noted that '[e]ffective disinformation campaigns also rely on high quality data to have insight into voting patterns'.[5]

2.9 This issue of concern is discussed in greater detail in Chapter 6, Platforms.

Harassment of diaspora communities in Australia

2.10 Social media platforms are being used to facilitate the transnational repression of individuals and marginalised communities in Australia. The Australian Strategic Policy Institute (ASPI) observed that:

This poses a significant threat to the freedom of Australians, and others residing in Australia, to express their opinion and access online spaces … On social media, this includes online trolling, stalking or harassment, and is typically conducted by authoritarian states to coerce their citizens and others abroad.[6]

2.11 Four diaspora communities in Australia are currently known to be subject to significant surveillance and interference by foreign state actors: Iranians, Chinese, Tibetans and Ukrainians.

2.12 The impact of this foreign interference on the lives of individuals cannot be overstated. Ms Vicky Xu, a journalist and policy analyst, told the committee that:

… I'm still dealing with death threats. I'm still dealing with repeated hacking attempts. Just this week I received a dozen hacking attempts across all of my accounts … I've had to adapt the way I live, my lifestyle, to one that's akin to a criminal, I would say. People in democracies, politicians, academics and people with good social standing tell me things like I'm going to end up in history books, and that all sounds grand, but what about life? I'll just leave it at that.[7]

Iran

2.13 There have been many recent media reports of harassment or surveillance of Iranians in Australia, specifically targeting dissidents who speak out against the current Iranian regime.

2.14 In one case, the mother of a leading Iranian-Australian protester was jailed in Tehran and interrogated about her Australian relatives in what her family and experts fear is part of a wider attempt to silence Australia's Iranian pro-democracy protesters. Iranian-Australians say they do not feel safe from the Iranian regime even inside Australia's borders. Several activists said local protests had sometimes turned violent after people suspected of being linked to the Iranian regime arrived and filmed them.[8]

2.15 The Minister for Home Affairs and Minister for Cyber Security advised the Australian public in February 2023 that ASIO 'disrupted the activities of individuals who had conducted surveillance in the home of an Iranian-Australian'.[9]

2.16 Mr Peter Murphy, Co-secretary of Australian Supporters of Democracy in Iran, told the committee that 'it's very, very clear that in Iran the state controls the mainstream media and has also invested heavily in cyberwarfare, cybersurveillance and creating its own messaging apps to enable it to monitor dissident opinions and communicate its own "information"'.[10]

2.17 Mr Murphy further told the committee:

The basics of our [Australia's] democratic society include freedom of association, freedom of speech and freedom of expression, and this family in Melbourne has been told to stop talking, stop attending certain meetings and stop expressing certain views. It is really a blatant attack on our basic rights by a foreign government.[11]

2.18 The Australian Supporters of Democracy in Iran submitted that most cybersecurity efforts in Australia do not specifically relate to political interference through social media. They argued that 'this is an area of policy and governance which is underdeveloped, including in Australia'.[12]

2.19 A recent expert study commissioned by the European Parliament noted that 'Iran is among the actors who focus on "laundering disinformation" through seemingly legitimate front and proxy media' and further argued that these 'operations will remain untouched by measures targeting fake accounts or inauthentic behaviour'. The study recommended a dedicated taskforce, fact-checkers and sanctions against malicious actors, but noted that 'sanctions would have to be proceeded by appropriate investigations in order to identify those who should be sanctioned'.[13]

Senate inquiry: Human Rights implications of recent violence in Iran

2.20 The Senate Foreign Affairs, Defence and Trade References Committee (FADT committee) recently tabled a report into politically motivated violence in Iran. This inquiry received a significant volume of evidence from affected individuals, with a total of nearly 400 submissions. The committee expressed concerns at 'reports of threats and intimidation against Australian residents and citizens' and noted the 'high level of concern within the Iranian Australian diaspora about potential monitoring of individuals on social media, as well as at rallies and protests held in Australia'.[14]

2.21 The FADT committee further noted that Home Affairs and the Australian Federal Police advised that reports of such intimidation and threats would be investigated where they reached a criminal threshold. The FADT committee made three recommendations on this issue:

that the responsible Ministers provide an update to the Parliament and the Australian public on the government's current assessment of whether persons connected to the Islamic Republic of Iran (IRI) regime are undertaking such behaviour in Australia;

that the Australian Government ensure there is an appropriate level of expertise and resourcing in the relevant government departments and agencies, including foreign language speakers and community liaison officers, available to quickly investigate and assess threats against Australians; and

that all reports by the Iranian community in Australia of threats, intimidation, monitoring or surveillance are followed up, recorded, assessed and reported to the lead coordination agency (regardless of whether individual reports result in a criminal investigation) to ensure that the government has a complete picture of foreign interference efforts by the IRI in Australia. The committee further recommended that the relevant agencies report to Parliament through the Joint Committee for Intelligence and Security on such activities.[15]

Russia

2.22 The committee was told that Russian-backed influence and interference campaigns are occurring in Australia. Mrs Kateryna Argyrou of the Australian Federation of Ukrainian Organisations told the committee 'we have many members of our community that have been targeted by either the Russian community or the Russian consulate here'. Mrs Argyrou said the targeting was:

… quite varied, anything from direct death threats through social media, mostly coming through Telegram and Facebook where certain people, especially community leaders, are targeted in the most vile way. They basically describe how they would be executed, beheaded, where their head is going to end up and so on and so forth. It's very confronting.[16]

2.23 Mrs Argyrou further stated that community concerns around COVID-19 have been exploited as a tactic in influence campaigns:

They've found a very innovative and interesting way to do that, where they target different subgroups in Australia to try to influence opinion and perception. For example, through … the Russian consulate and Russian agents in Australia they have been very successful in infiltrating the antivax movement. First, they learn about the ideals and aims of a certain movement. They share those ideals and views, and they quickly become leaders of that and through that they place typical Russian narratives and propaganda. They have been very successful in infiltrating that antivax movement here in Australia, and that's just one example.[17]

China

2.24 The committee received a great deal of evidence of the CCP engaging in interference in Australia, both via social media and in real-world events.

2.25 Badiucao, a Chinese-born Australian dissident artist, told the committee:

I have been receiving death threats almost on a daily basis. That has just been a burden on me every day. The Chinese government know that the only way that they can stop dissidents overseas from talking about the important human rights issues is by launching character assassination campaigns.[18]

2.26 Badiucao further told the committee with regard to WeChat:

… WeChat has become this very toxic headquarters for misinformation and propaganda for the Chinese government, but at the same time for users, no matter whether you’re inside China or outside China in Australia, the rate for free speech to survive is zero.[19]

2.27 Ms Vicky Xu, a journalist and policy analyst, told the committee that:

… the Chinese government has been attempting to silence, intimidate and harass me … Because of my reporting on [Hong Kong protests]—whether that's orchestrated by someone or whether that's just voluntary sort of anger from the people, I don't know—a large online social harassment campaign occurred on the platform called Weibo. It had such an impact on my family life that I became estranged from my father.[20]

2.28 Ms Xu told the committee that social media played a large role in this character assassination 'because there are hundreds of accounts pumping out content that goes across the entirety of the Chinese internet, so from WeChat and Weibo to the Chinese version of Quora, which is Zhihu, and to the video platforms'.[21]

2.29 Dr Seth Kaplan noted the dangers that Chinese social media platforms and apps pose to the Australian-Chinese community:

… the Chinese diaspora in Australia, the Chinese Australians, are a vulnerable community because of how their information is being managed, and that information prevents many voices from being heard. It prevents many civic activities or civil society organisations forming and/or developing. It creates tensions within the community because only certain voices, which could be more extreme on the left or right or could simply be saying very negative [things].[22]

Tibet

2.30 Mr Kalsang Tsering, President of the Australian Tibetan Community Association (Tibetan Association), told the committee of the plight of Tibetan Australians being monitored by the CCP in Australia:

There are about 3,000 Tibetans all over Australia. Tibetan communities have been persecuted, and we are not able to freely contact our families back in Tibet. Most of the Tibetans in Australia are former political prisoners; or their parents or siblings are former political prisoners. Almost everyone, 90 per cent of us, have families back in Tibet, and we cannot freely communicate with them.[23]

2.31 The Tibetan Association detailed a range of alleged human rights abuses by the CCP against Tibetan-heritage people in Australia, including instances of surveillance, interference, and repression of freedom of expression, association, and the right to protest. The association also claimed that bots are used to spread CCP information, misinformation and disinformation, to sow disunity, and to attack posts in support of Tibet.

2.32 The Tibetan Association claimed that some of these actions have been taken through social media applications, including WhatsApp, Twitter, Facebook, and WeChat (used by Tibetans and Chinese overseas and in Australia), and through phone calls and messages.

2.33 The Tibetan Association described the actions taken against Tibetans and their families, including harassment, fear, intimidation, death threats and actions against family in Tibet. It also claimed that Tibetan community members in Australia have become increasingly suspicious and paranoid of each other as a result of these activities. Mr Tsering stressed this impact directly to the committee, stating:

It is a very big thing for us to communicate freely, because we were born in Tibet and came to Australia as a second home. We want to speak about the atrocities and the human rights violations in Tibet, but we cannot speak freely. Still, we feel that it's our duty, in a free country like Australia, to speak about what is happening inside Tibet. As we do that, the Chinese government even tries to go through our phones or laptops and tries to get information about us, and then they try to persecute, question or detain our families back home.[24]

China—global concerns

2.34 Information considered by the committee strongly suggests that the Australian examples above are not happening in isolation, and are not happening only to diaspora communities. They are occurring as part of a global interference and influence campaign by the CCP that has been decades in the making and has the potential to cause seismic shifts in geopolitical stability.

2.35 A recent joint address was delivered by the heads of the Security Service MI5 (United Kingdom) and the Federal Bureau of Investigation (United States). It was the first time the heads of those agencies had shared a public platform, and they came together 'to send the clearest signal we can on a massive, shared challenge: China'. The address made the following key points:

The CCP is covertly applying pressure across the globe. Rather than the work of lone actors, it is a coordinated campaign on a grand scale, involving planned, professional activity to conduct a strategic contest across decades.

The CCP adopts a whole-of-state approach in which businesses and individuals are forced by law to cooperate with the Party.

President Xi said that in areas of core technology where it would otherwise be impossible for China to catch up with the West by 2050, they 'must research asymmetrical steps to catch up and overtake'.

The CCP doesn't just use intelligence officers posing as diplomats in the classic fashion. Privileged information is gathered on multiple channels, in what is sometimes referred to as the 'thousand grains of sand' strategy.

The Chinese intelligence services, or bodies within the CCP itself—such as its United Front Work Department—are mounting patient, well-funded, deceptive campaigns to buy and exert influence.

Countering state threats needs a profound whole-of-system response, bringing together not just the national security community but counterparts in economic and social policy, in industry, in academia.[25]

2.36 In its February 2023 report on foreign interference, the European Union External Action Service identified the CCP as a:

… multifaceted FIMI [foreign information manipulation and interference] actor with an arsenal that is diverse and includes various tactics. Its activities range from benign – public diplomacy – to clearly illegitimate – intimidation and harassment of critical voices with the aim of suppressing information also outside of its borders.[26]

2.37 The report outlined that China has been observed using its own state-controlled media and economic leverage over other media outlets to influence media coverage in its own interests, while simultaneously suppressing critical stories using 'a wide range of often covert tactics, including intimidating and harassing individuals, also targeting overseas Chinese communities'.[27]

2.38 An ASPI report into cyber-enabled foreign interference found that:

The Chinese Communist Party (CCP) has invested US$6.6 billion into its global media presence since 2009. It has run covert information operations on Silicon Valley social media platforms since at least 2017. The party invests in global data collection on a massive scale, and its external propaganda is increasingly precisely targeted to granular international audience segments.[28]

2.39 The ASPI report pointed to statements by Chinese President Xi Jinping that clearly outlined this intent and noted that 'the CCP is strategically and deliberately building an international-facing propaganda system that's designed to reshape the international order'.[29]

2.40 Ms Lindsay Gorman of the Alliance for Securing Democracy noted the increasing use of social media by the CCP to achieve these strategic objectives. Ms Gorman told the committee:

… over the last five years, PRC [People's Republic of China] diplomats and state media outlets have flocked to social media to spread China's message to the world, increase its discourse power, denigrate the United States and the international world order, and position itself as an alternative democracy.[30]

2.41 Mr Fergus Ryan of ASPI similarly told the committee:

The Chinese Communist Party and propaganda officials see a unique opportunity in the world at the moment, where the internet has disrupted traditional media. They see social media as an outlet that they can use to close the gap when it comes to China's soft power versus the West's soft power. These activities have been taking place on Facebook, on Twitter and on YouTube, and, as you said, there have been efforts to identify that activity that's taking place.[31]

2.42 Internet 2.0 described how authoritarian regimes, including China, 'know the value of the data, and have moved to collect it on us and limit our access to it in their social media domain' and 'conduct mass surveillance through social media within their own borders'. Internet 2.0 further stated that the applications such regimes build are purpose-built for surveillance, and that when they 'export these business models to our social media environments the legacy of this surveillance culture permeates'.[32]

2.43 Ms Yaqiu Wang, a Senior China Researcher with Human Rights Watch, told the committee that China broadens its intelligence-gathering reach by coercing private companies and using them as tools for political purposes:

The CCP has a record of compelling domestic and foreign companies to toe the party line and of punishing those who fail to sufficiently do so. It has forcefully disappeared a score of business executives under murky circumstances, a practice that has only increased in recent years under President Xi Jinping… this sends a clear message to business leaders in China that the price of opposing, or even appearing to oppose, the CCP can be extraordinarily steep.[33]

2.44 Ms Shanthi Kalathil, a former advisor on the United States (US) National Security Council, told the committee of relevant Chinese laws:

There are a number of data laws that are relevant to this issue, including the 2015 National Security Law, the 2017 National Intelligence Law, the 2017 national Cybersecurity Law and the Data Security Law. The lack of a meaningful barrier between private enterprise and the demands of the party state is less a technical or legal issue than a political issue.[34]

2.45 Meta also provided commentary on changes in approach and an overall escalation of coordinated inauthentic behaviour (CIB) originating from China:

… we have seen quite a shift in tactics and approach by China based actors over the past seven months or so. Fifty per cent of the China-originating CIB networks we have actioned in the last four years we've taken down in the last seven months. We are seeing a whole range of new tactics evolving, such as operations that are linked to troll farms; attempts to co-opt journalists, NGOs [non-government organisations] or other respected third parties; and attempts to work through PR [public relations] firms.[35]

Case study: Mr Kenny Chiu

2.46 According to an independent think tank, the Council on Foreign Relations, China has pursued a pattern of influence operations in the Pacific Rim to shift narratives toward China's points of view, promote pro-China politicians, or sometimes simply to sow chaos and falsehood. US law enforcement authorities warned about such interference in the 2022 midterm elections, and both Google's and Meta's cybersecurity arms have warned of Chinese online interference in the midterms.[36]

2.47 Raising similar concerns about North American election meddling, Canadian Security Intelligence Service documents revealed that China had employed a sophisticated strategy to defeat Conservative Party politicians considered to be unfriendly to Beijing in the 2021 federal election in Canada. The tactics included disinformation campaigns, undeclared cash donations, and the use of international Chinese students studying in Canada as campaign volunteers to support preferred Liberal Party candidates.[37]

2.48 The committee heard directly from Mr Kenny Chiu, a former Canadian member of parliament, who said China's alleged election meddling is the reason he lost his seat in the 2021 Canadian federal election.

2.49 In 2021, Mr Chiu introduced a private member's bill which proposed a Foreign Influence Registry to require 'individuals acting on behalf of a foreign principal to file a return when they undertake specific actions with respect to public office holders'. The bill became the target of a misinformation campaign. Mr Chiu also had a record in parliament of advocating for Hong Kong and democracy, criticising Beijing's violations of human rights, and urging the Canadian government to impose sanctions on China.[38]

2.50 Mr Chiu reported that during the 2021 federal campaign, he discovered WeChat stories circulating which claimed that his private member's bill would put Chinese-Canadians in danger. Mr Chiu alleged efforts to oust candidates seen as unfriendly to Beijing, and stated he was targeted by pre-election disinformation on Chinese-language social media.[39]

2.51 Mr Chiu told the committee:

It started before the election, and during the election we started seeing authoritative articles being written and circulated among the WeChat community which both attacked my then leader, Mr Erin O'Toole, and specifically also used my private member's bill to attack me. The articles claimed that we were anti-Chinese, that we were Chinese haters and that therefore we must be opposed. They said that should I get re-elected again, my bill would somehow automatically be passed, and all Chinese in Canada would be subject to unimaginable Chinese exclusion and persecution. This disinformation was activated soon after the election started and came with a ferocity that, frankly speaking, was not what I had anticipated or expected, so we were caught unprepared.[40]

2.52 Media reporting also suggests the interference campaign went unnoticed by the body established by the Canadian government to 'monitor threats to federal elections'. The Security and Intelligence Threats to Elections Taskforce is reported to have issued no public warnings about any campaigns and advised the Prime Minister that it had found no evidence of interference, reporting to parliament that it had 'determined that the integrity of our elections was not compromised in 2019 or 2021'.[41] However, it appears that since 2021 foreign interference threats to democratic elections in Canada have only grown.[42]

Interference in Australian politics

2.53 Internet 2.0 submitted that authoritarian regimes seek to interfere in elections within liberal democracies not just to promote preferred candidates, but also simply to reduce the population's confidence in the true results of elections. It stated that by 'conducting divisive campaigns, they seek to divide our societies, to weaken us, and to fracture our uniting values'.[43]

2.54 CyberCX agreed, noting that 'disinformation and misinformation need not change the outcome of an election, but merely raise the spectre of illegitimacy'.[44]

2.55 During the Australian Security Intelligence Organisation (ASIO) annual threat assessment in 2022, the Director-General advised:

… espionage and foreign interference has supplanted terrorism as our principal security concern … The threat is pervasive, multifaceted and, if left unchecked, could do serious damage to our sovereignty, values and national interest.

I can confirm that ASIO recently detected and disrupted a foreign interference plot in the lead-up to an election in Australia.

2.56 The Director-General provided details of the attempted foreign interference. A wealthy individual with direct and deep connections to a foreign government and its intelligence agencies hired and financed an employee to identify candidates likely to run in the election who either supported the interests of the foreign government or who were assessed as vulnerable to inducements and cultivation. These candidates were then supported through financial donations and promoted in positive media stories in Australian foreign language media. The purported aim was to exploit successful candidates' sense of indebtedness and to manipulate them to the advantage of the foreign government.

2.57 ASIO intervened and harm was avoided. However, ASIO suggested a scenario of possible outcomes from a successful interference of this nature:

Some of the candidates get elected. The puppeteer's employee then recommends they hire certain other associates as political staffers. These people are also agents or proxies of the foreign government, and will try to influence the politician, shape decision-making and help identify other political figures who can be influenced and recruited. Down the track, the new parliamentarians might be asked for information about the party's position on defence policy, human rights, foreign investment or trade. This information will be sent to the foreign power without the knowledge of the parliamentarian. At some point, the politicians might be prevailed upon to vote a particular way on a contentious issue, or lobby colleagues to vote a certain way.[45]

Tactics

2.58 The following section outlines some of the key tactics used in foreign interference through social media. Actions being taken to address these are described in Chapter 6, which looks in-depth at social media platforms.

Misinformation and disinformation

2.59 Misinformation and disinformation are key concerns in this space; however, ASIO noted that they are distinguished from each other on the basis of intent: misinformation is false or misleading content spread through ignorance or by mistake, while disinformation is false or misleading content spread with the intention to cause harm or deceive.[46]

Box 2.1 Department of Foreign Affairs and Trade, Our Work: Disinformation & Misinformation

The increasing ubiquity of cyberspace and social media platforms as a source of information and for personal engagement on a wide range of social and community issues has made them a key venue for the dissemination and amplification of disinformation and misinformation.

Disinformation may be used as a tool for foreign interference. Foreign actors continue to spread disinformation to serve their own strategic interests and to undermine public trust in democratic institutions and confidence in official messaging, disrupt the proper functioning of open media, or undermine social cohesion.

Recent electoral processes around the world have shown that digital disinformation campaigns have low barriers to entry, and that malicious actors have effectively hijacked public discourse to influence communities and broader public opinion on matters of significant importance. Unwittingly, platforms designed to promote openness have been misappropriated to promulgate and amplify disinformation and misinformation, sow division and mistrust, and ultimately pervert public discourse.

Malicious use of critical technologies is increasingly occurring to amplify these campaigns. This includes 'bots' that drown out legitimate online debate, data-driven technologies that enable malicious or harmful micro-targeting of susceptible and/or influential audiences, and machine learning-enabled 'deep fakes' that spread disinformation.

Disinformation should be differentiated from foreign influence. All governments can seek to influence discussions on issues of importance. When conducted in an open and transparent manner, foreign influence can contribute positively to public debate and form a legitimate part of international engagement.[47]

2.60As outlined above, disinformation can be spread by foreign state actors to influence election outcomes or decision-making, to target diaspora communities, or simply to sow chaos and dissent.

2.61Australians are most likely to see disinformation on larger digital platforms, like Facebook and Twitter. However, smaller private messaging apps and alternative social media services are also increasingly used to spread disinformation and conspiracies due to their less restrictive content moderation policies.[48] Because foreign interference via social media takes place on commonly used and trusted platforms, it is hard for laypeople to detect as they go about their business online.

2.62The following sections discuss how misinformation and disinformation are spread online.

Algorithms

2.63The use of algorithms on social media is ubiquitous. These programs run in the background of platforms like Facebook, Twitter and YouTube, analysing what material a user is interested in and tailoring their feed to capture as much of the user's time and attention as possible. While what appears on a social media feed can feel 'natural', these recommender systems and algorithms can be used not only to identify people's interests but also to manipulate their emotions.

2.64The eSafety Commissioner recently developed a position statement on recommender systems and algorithms which stated that 'while recommender systems and algorithms have positive uses, they can also present an array of risks to users', one of which is to 'amplify harmful and extreme content, particularly when used by platforms whose content feeds are driven by user engagement'. The eSafety Commissioner further noted that this can normalise 'prejudice and hate and distrust in public institutions' and may also 'contribute to radicalisation towards terrorism, violent extremism, and provide users with avenues to find associated groups'.[49]

2.65In a submission to the inquiry held in the 46th Parliament, Responsible Technology Australia described how these algorithms were initially developed to sell advertisements, but have since been found often to have the unintended effect of promoting extreme and inflammatory material:

As the primary aim of these platforms is to maximise user time spent on them (to increase their advertising revenue potential), the algorithms are incentivised to serve material that is calculated to engage users more. This content tends to be more extremist or sensationalist or untrue - as it has been shown to be more captivating. This opens the door for foreign agents to seed inflammatory and sensational content that users engage with out of outrage or support, and is then amplified by the algorithms which see all engagement as warranting amplification - regardless of the nature of the content.[50]
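The engagement-driven amplification described above can be illustrated with a deliberately simplified sketch. The posts, signals and weights below are invented for demonstration only and do not reflect any platform's actual ranking system:

```python
# Illustrative toy model of an engagement-driven ranking algorithm.
# All post data and weights are hypothetical.

posts = [
    {"id": 1, "sensational": 0.2, "past_engagement": 0.5},
    {"id": 2, "sensational": 0.9, "past_engagement": 0.6},
    {"id": 3, "sensational": 0.1, "past_engagement": 0.9},
]

def predicted_engagement(post):
    # Engagement-optimising feeds reward whatever held attention before;
    # sensational material tends to score highly on both signals.
    return 0.6 * post["past_engagement"] + 0.4 * post["sensational"]

# Rank the feed by predicted engagement, highest first.
feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["id"] for p in feed])  # → [2, 3, 1]
```

In this toy example the most sensational post (id 2) is ranked first even though another post (id 3) had stronger past engagement, mirroring the dynamic Responsible Technology Australia describes: a system optimising only for engagement amplifies inflammatory content regardless of its nature.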

Micro-targeting

2.66While the use of algorithms described above could be considered a 'normal' part of the experience of being on social media platforms, micro-targeting is the weaponisation of the social media environment to further the goals of corporate, state or outright malicious actors. Responsible Technology Australia described the practice as:

Targeted Advertising | The unfettered approach to data collection has amassed history's largest data sets, allowing advertisers to push beyond normal constraints to deliver direct and granular targeting of consumers. This microtargeting often uses key emotional trigger points and personal characteristics to drive outcomes, which malicious actors can easily exploit to sow distrust, fear and polarisation.[51]

2.67The Joint Standing Committee on Electoral Matters' Report on the conduct of the 2016 federal election described the phenomenon as 'dark advertising', which 'allows groups and companies to target specific individuals or groups (microtargeting), with the goal of shifting their opinions. It is different from normal advertising because it will be seen by only the intended recipient.'[52]
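The 'dark advertising' described above can be sketched as a simple audience filter. The profile records and targeting criteria below are hypothetical and far cruder than real advertising platforms' tooling:

```python
# Hypothetical sketch of micro-targeting: selecting a narrow audience
# segment from collected profile data. All records and criteria are invented.

users = [
    {"id": 1, "age": 62, "region": "QLD", "interests": {"veterans"}},
    {"id": 2, "age": 24, "region": "VIC", "interests": {"climate"}},
    {"id": 3, "age": 58, "region": "QLD", "interests": {"veterans", "fishing"}},
]

def match(user, criteria):
    # A user is in the segment only if every targeting criterion matches.
    return (user["age"] >= criteria["min_age"]
            and user["region"] == criteria["region"]
            and criteria["interest"] in user["interests"])

# A 'dark ad' is seen only by the narrow segment it targets;
# everyone else never knows the message exists.
criteria = {"min_age": 50, "region": "QLD", "interest": "veterans"}
audience = [u["id"] for u in users if match(u, criteria)]
print(audience)  # → [1, 3]
```

The point of the sketch is the asymmetry: the message reaches only users 1 and 3, so it is invisible to the wider public and to scrutiny, which is what distinguishes micro-targeted 'dark' advertising from conventional advertising.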

Bots

2.68A common method of foreign interference and/or influence on social media is the use of 'bots'. Bots (short for 'robots') are artificial social media accounts that mimic the behaviour of real users, spreading misinformation or performing whatever other function they have been designed to carry out. Bots are:

… algorithmically-driven computer programmes designed to carry out specific tasks online, such as analysing and scraping data. Some are created for political purposes, such as automatically posting content, increasing follower numbers, supporting political campaigns, or spreading misinformation and disinformation.[53]

2.69Bots can be difficult to identify and remove, and are sometimes sufficiently 'intelligent' to interact with accounts operated by real people. These bots can be used to spread rumours, promote individuals and otherwise rapidly spread misinformation and disinformation online.

2.70A related online phenomenon is the 'rumour cascade', which bots can help propagate. A rumour cascade begins when a single user makes an assertion, leading to others propagating the rumour and retweeting it. Researchers have found that 'false news spreads more pervasively than the truth online', and that this spread is driven mainly by human users rather than bots.[54]
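A toy, deterministic model can illustrate how automated reshares widen a cascade's reach. The follower graph and reshare behaviour below are invented for illustration only:

```python
# Toy model of a rumour cascade on a follower graph.
# The graph and the 'bot' accounts are hypothetical.

followers = {
    "origin": ["a", "bot1", "bot2"],
    "a": ["b"],
    "bot1": ["c", "d", "e"],   # bots are often seeded into dense follower networks
    "bot2": ["f", "g"],
    "b": [], "c": [], "d": [], "e": [], "f": [], "g": [],
}

def cascade(start, resharers):
    """Breadth-first spread: a rumour reaches the followers of every resharer."""
    seen, frontier = {start}, [start]
    while frontier:
        nxt = []
        for account in frontier:
            if account == start or account in resharers:
                for follower in followers.get(account, []):
                    if follower not in seen:
                        seen.add(follower)
                        nxt.append(follower)
        frontier = nxt
    return len(seen) - 1  # accounts reached, excluding the origin

# One human reshares, versus bots configured to always reshare.
print(cascade("origin", {"a"}))                  # humans only → 4
print(cascade("origin", {"a", "bot1", "bot2"}))  # with bots → 9
```

Even in this tiny graph, two always-on bot accounts more than double the rumour's reach, which is how automated accounts can accelerate a cascade that human users then carry further.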

Troll farms

2.71Troll farms (or factories) create multiple fake social media profiles to conduct disinformation and propaganda activities on the Internet, usually focused on the political or economic sphere, for example by attacking political opponents or carrying out other actions specified by the ordering party. Troll farms achieve their goals using, among other things, fake news and hate speech.[55]

2.72The committee was told that in April 2023, US prosecutors charged 34 officers of China's Ministry of Public Security with operating a troll farm to attack Chinese dissidents, sow division and disseminate disinformation. Thecommittee heard that prosecutors 'allege that the officers were part of an elite task force called the 912 Special Project Working Group and that the group created thousands of fake profiles on social media sites, including Twitter, and spread propaganda about topics like human rights in Hong Kong and the Xinjiang region'.[56]

Content farms

2.73Content farms use similar tactics to troll farms, creating multiple fake 'news' sites that appear to be separate entities but are in fact jointly operated and coordinated. They are profit-driven, generating traffic and advertising revenue, and usually distribute entertainment-based content, although 'articles about politics and current affairs are also common, especially if news events are sensational or high profile'. Content farms can then be 'leveraged by state actors to disseminate propaganda or manipulate how the reporting of events is framed'.[57]

Emerging tactics

2.74Three new tactics that have the potential to exponentially increase foreign interference through social media are now emerging: the establishment of companies that offer fee-based interference services; the development of artificial intelligence; and the increasing decentralisation of social media apps.

Foreign interference as a service

2.75A report by the Oxford Internet Institute in early 2021 highlighted the existence of disinformation-as-a-service. Then in 2023, further reporting revealed the emergence of foreign-interference-as-a-service.

2.76In early 2021 the Oxford Internet Institute reported that disinformation had become 'a common strategy of cyber manipulation', finding that of the 'organised social media manipulation campaigns' operating in 81 countries, 'more than 76 of these campaigns deployed disinformation as part of political communication'. One of the report's authors noted that:

Our 2020 report highlights the way in which government agencies, political parties and private firms continue to use social media to spread political propaganda, polluting the digital information ecosystem and suppressing freedom of speech and freedom of the press. A large part of this activity has become professionalized, with private firms offering disinformation-for-hire services.[58]

2.77The Oxford Internet Institute's researchers found:

Private 'strategic communications' firms are playing an increasing role in spreading computational propaganda, with researchers identifying state actors working with such firms in 48 countries.

Almost $60 million has been spent on firms that use bots and amplification strategies to create the impression of trending political messaging.

Social media has become a major battleground, with firms such as Facebook and Twitter taking steps to combat 'cyber troops', while some $10 million has been spent on social media political advertisements. The platforms removed more than 317 000 accounts and pages from 'cyber troops' actors between January 2019 and November 2020.[59]

2.78In early 2023 a global group of reporters working together uncovered an outfit—referred to in reporting as 'Team Jorge'—that provided hacking and disinformation services and had allegedly interfered in 33 elections around the world. The investigation and subsequent reporting on the scale and scope of the group's activities highlighted that foreign-interference-as-a-service had well and truly arrived.

2.79An article reporting on the group's activities observed that 'in the past, online disinformation campaigns appeared to work in isolation', but Team Jorge operated differently:

What's so powerful about this example is that they're not just doing the disinformation, they're also involved in 'on the ground' operations. That's a really dangerous combination … As an example: reporters were able to identify a fake protest staged outside a company headquarters in London where masked protesters filmed themselves waving placards – a stunt that was then circulated on social media.[60]

Artificial intelligence

2.80Artificial intelligence (AI) is expected to have an exponential impact on foreign interference and disinformation campaigns. Internet 2.0 submitted:

Artificial intelligence will enable disinformation campaigns as it exponentially increases the computation and distribution of high quality information … The implications of artificial intelligence, if it is not regulated in its use within journalism, elections, and the media, are that our authoritarian adversaries will have the advantage in the information domain.[61]

2.81Ms Shanthi Kalathil noted that foreign interference 'is about to take a significant turn into the unknown anyway, with the advent of generative AI and other advances'. Ms Kalathil advised that:

… we are behind the curve in equipping society's populations and educational institutions and other key components of civil society with the knowledge and resilience to pass information in this age. There's no easy solution to this issue, but at a minimum we must be better prepared to confront even more potent forms of information manipulation than what we have already encountered.[62]

2.82Ms Lindsay Gorman of the Alliance for Securing Democracy outlined the dangers that AI presents for foreign interference:

Deepfake images, video and text can enable automated propaganda. The rise of content that looks plausible but is not necessarily true also runs the risk of undermining broader trust in the information environment, which is critical for quality information to flourish in democracies. As democracies contemplate regulation on AI, restoring trust in the information environment and guarding against authoritarian information manipulation should be at the forefront of these conversations.[63]

2.83Mr David Robinson of Internet 2.0 argued that AI 'is increasing the effectiveness of influence and disinformation campaigns against elections' and that when combined with high quality data 'we cannot guarantee we can defend ourselves and we will not be able to reverse the loss of trust our system will suffer'.[64]

2.84Mr Robinson further noted the impact that AI will have on democracy:

Social media is so important in elections because people make decisions based on what the group is saying or doing, and government makes decisions in its normal running based on what it thinks the population wants. [However] a social media platform with AI and botnets can create hundreds of thousands of accounts and talk about the same issue. That influences what you as lawmakers and politicians and what the population do and say to each other. I think AI is very dangerous, and if we don't have laws to regulate how it acts on social media—about talking about elections, for example—I think we're running out of time, to be honest. I think it's the most dangerous thing.[65]

2.85The Australian Signals Directorate similarly advised the committee that AI poses risks through its ability to 'speed up the capacity' of cyber-based attacks.[66] Home Affairs noted that the Department of Industry is the coordinating body for the current project which is 'forming a whole-of-government position on the opportunities and risks associated with AI', with Home Affairs providing the 'national security policy advice informing that decision-making' including by 'partnering with research institutions to understand the policy parameters and techniques that currently exist in perpetrating or proliferating the threat'.[67]

Meta Threads and decentralisation

2.86Launched by Meta on 5 July 2023, Threads is touted as a decentralised version of Twitter and signed up 30 million users in its first day. A key aspect of decentralisation is the ability for users to host their own instances or servers, giving them greater control over their data, as offered by apps such as Lens Protocol, Mastodon and Steemit. While Threads does not currently offer this option, technology writers note that Meta may incorporate these features in the future.[68] Even in its current state, however, Threads is not available in the European Union because it does not comply with safety regulations.[69]

2.87Threads has been noted to collect more user data than Twitter:

Threads is collecting almost everything it can, including data on your health, purchases, financial info, location, contact info, search history, and browsing history.[70]

eSafety concerns with a decentralised internet

2.88The eSafety Commissioner published a position statement on a decentralised internet with the following summary:

Under a decentralised internet, often referred to as 'DWeb' or 'Web 3.0', users are said to have more power because they can access online services and platforms without relying on a concentration of large technology companies that own or operate mainstream, centralised servers (the computer hardware and software that stores data).

While decentralisation can allow users to protect their information and control their online experiences, it can also make it more difficult to hold users (or the entities behind them) responsible for illegal and harmful content and conduct.

2.89The eSafety Commissioner noted some positives of decentralisation:

Decentralisation could improve the security, privacy and autonomy of online users by giving them greater control over their personal information and online experiences.

Decentralisation could also enhance freedom of expression and protect diversity of thoughts and opinions and reduce the risk of monitoring, tracking and targeting of at-risk or marginalised individuals or groups, including whistle-blowers and advocates for social change.

2.90Risks identified by the eSafety Commissioner included:

The absence of centralised servers and lack of central authority, along with the storage and distribution of data across many computers, makes it difficult to moderate or regulate decentralised services and platforms or enforce the removal of illegal and harmful content. A decentralised internet may become a haven for criminal activities and for users who have been removed from mainstream services and platforms.

It would be up to communities on each platform to decide and apply standards, with a risk that unchecked online environments can allow a range of harms from abuses to grow, without providing any way for users to get help or for consequences to be imposed on those responsible.

The inability to enforce standards for conduct and content within a decentralised internet may actually harm freedom of expression instead of improving it.

Using decentralised services may give users more control over their information and online experiences, but it also increases their own responsibility for understanding and operating in unregulated environments and keeping their personal information secure.

2.91The eSafety Commissioner recommended adopting a Safety by Design approach when developing decentralised platforms to mitigate these risks, by including protections such as:

community moderation;

opt-in governance;

identity verification; and

content moderation.[71]

Footnotes

[1]Department of Home Affairs, Submission 1, p. 3.

[2]Mrs Kateryna Argyrou, Co-Chair, Australian Federation of Ukrainian Organisations, Committee Hansard, 21 April 2023, p. 29.

[3]Badiucao, Private capacity, Committee Hansard, 21 April 2023, p. 33.

[4]Claire Moravec, 'The weaponization of social media', Security Magazine, 14 October 2022.

[5]Internet 2.0, Submission 17, p. 3 and 6.

[6]Australian Strategic Policy Institute, Submission 13, p. 3.

[7]Ms Vicky Xu, Senior Fellow, Australian Strategic Policy Institute, Committee Hansard, 21 April 2023, p. 35.

[9]Minister for Home Affairs and Minister for Cyber Security, Hon Clare O’Neil MP, ‘Foreign Interference in Australia – ANU address’, 14 February 2023.

[10]Mr Peter Murphy, Co-Secretary, Australian Supporters of Democracy in Iran, Committee Hansard, 21April 2023, p. 20.

[11]Mr Peter Murphy, Australian Supporters of Democracy in Iran, Committee Hansard, 21 April 2023, p.22.

[12]Australian Supporters of Democracy in Iran, Submission 23, p. 4.

[13]Directorate General for External Policies of the Union, The misuse of social media platforms and other communication channels by authoritarian regimes: Lessons learned, December 2021, p. 45.

[14]Senate Foreign Affairs, Defence and Trade References Committee, Human Rights implications of recent violence in Iran, p. 49.

[15]Senate Foreign Affairs, Defence and Trade References Committee, Human Rights implications of recent violence in Iran, pp. 38–51.

[16]Mrs Kateryna Argyrou, Australian Federation of Ukrainian Organisations, Committee Hansard, 21April 2023, pp. 27–28.

[17]Mrs Kateryna Argyrou, Australian Federation of Ukrainian Organisations, Committee Hansard, 21April 2023, p. 27.

[18]Badiucao, Private capacity, Committee Hansard, 21 April 2023, p. 33.

[19]Badiucao, Private capacity, Committee Hansard, 21 April 2023, p. 35.

[20]Ms Vicky Xu, Australian Strategic Policy Institute, Committee Hansard, 21 April 2023, p. 34.

[21]Ms Vicky Xu, Australian Strategic Policy Institute, Committee Hansard, 21 April 2023, p. 34.

[22]Dr Seth Kaplan, Private capacity, Committee Hansard, 20 April 2023, p. 15.

[23]Mr Kalsang Tsering, President, Australian Tibetan Community Association, Committee Hansard, 21April 2023, p. 24.

[24]Mr Kalsang Tsering, Australian Tibetan Community Association, Committee Hansard, 21April 2023, p. 24.

[25]Security Service MI5, Joint Address by MI5 and FBI Heads, 6 July 2022, www.mi5.gov.uk/news/speech-by-mi5-and-fbi (accessed 14 July 2023).

[26]European Union External Action Service, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, February 2023, p. 10.

[27]European Union External Action Service, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, p. 10.

[28]Ms Danielle Cave and Dr Jacob Wallis, 'Cyber-enabled foreign interference', Strategic Insights, November 2022, p. 20.

[29]Ms Danielle Cave and Dr Jacob Wallis, 'Cyber-enabled foreign interference', Strategic Insights, November 2022, pp. 20–21.

[30]Ms Lindsay Gorman, Senior Fellow for Emerging Technologies, Alliance for Securing Democracy, German Marshall Fund, Committee Hansard, 21 April 2023, p. 8.

[31]Mr Fergus Ryan, Analyst, Australian Strategic Policy Institute, Committee Hansard, 20 April 2023, p.20.

[32]Internet 2.0, Submission 17, p. 5.

[33]Ms Yaqiu Wang, Senior China Researcher, Human Rights Watch, Committee Hansard, 21 April 2023, p. 1.

[34]Ms Shanthi Kalathil, Private capacity, Committee Hansard, 20 April 2023, p. 2.

[35]Mr Josh Machin, Head of Public Policy, Australia, Meta, Committee Hansard, 11 July 2023, p. 3.

[36]Joshua Kurlantzick, 'China's Growing Interference in Domestic Politics: Globally and in the United States', Council on Foreign Relations, 1 November 2022.

[37]Robert Fife and Steven Chase, 'CSIS documents reveal Chinese strategy to influence Canada's 2021 election', The Globe and Mail, 22 February 2023.

[38]Sze-Fung Lee and Benjamin Fung, 'Misinformation and Chinese interference in Canada's affairs', Policy Options, 4 January 2022.

[40]Mr Kenny Chiu, Private capacity, Committee Hansard, 21 April 2023, p. 15.

[41]Robert Fife and Steven Chase, 'CSIS documents reveal Chinese strategy to influence Canada's 2021 election', The Globe and Mail, 22 February 2023.

[42]See, for example: Steven Chase, 'A timeline of China's alleged interference in recent Canadian elections', The Globe and Mail, 9 March 2023 (accessed 26 July 2023).

[43]Internet 2.0, Submission 17, p. 6.

[44]CyberCX, Submission 16, p. 2.

[45]Australian Security Intelligence Organisation, Director-General's Annual Threat Assessment, 9February 2022.

[46]Australian Security Intelligence Organisation, Submission 2, p. 3.

[47]Department of Foreign Affairs and Trade, Our Work: Disinformation & Misinformation, www.internationalcybertech.gov.au/our-work/security/disinformation-misinformation (accessed 17 January 2023).

[49]eSafety Commissioner, Submission 10, p. 1.

[50]Responsible Technology Australia, Submission 17: Foreign Interference through Social Media inquiry (46th Parliament), p. 2.

[51]Responsible Technology Australia, Submission 17: Foreign Interference through Social Media inquiry (46th Parliament), p. 2.

[52]Joint Standing Committee on Electoral Matters, Report on the conduct of the 2016 federal election and matters related thereto, November 2018, p. 176.

[53]House of Commons (United Kingdom), Digital, Culture, Media and Sport Committee, Disinformation and 'fake news': Interim report, 29 July 2018, p. 19.

[54]Soroush Vosoughi, Deb Roy and Sinan Aral, 'The spread of true and false news online', Science, 9March2018, Volume 359: 6380, pp. 1146–1151.

[55]North Atlantic Treaty Organization (NATO), Media – (Dis)Information – Security, pp. 1–2.

[56]Ms Shanthi Kalathil, Private capacity, Committee Hansard, 20 April 2023, p. 2.

[57]Dr Wallis, Bogle, Zhang, Mansour, Niven, Ho, Liu, Dr Ong, and Dr Tapsell, 'Influence for hire', Strategic Insights, Policy Brief Report No. 48/2021, 10 August 2021, p. 17.

[59]Oxford Internet Institute, 'Social media manipulation by political actors now an industrial scale problem prevalent in over 80 countries – annual Oxford report', Media release, 13 January 2021.

[60]Archie Bland, 'Thursday briefing: The secret disinformation group who claim to influence elections worldwide', The Guardian, 16 February 2023 (accessed 26 July 2023).

[61]Internet 2.0, Submission 17, p. 6.

[62]Ms Shanthi Kalathil, Private capacity, Committee Hansard, 20 April 2023, p. 2.

[63]Ms Lindsay Gorman, Alliance for Securing Democracy, Committee Hansard, 21 April 2023, p. 9.

[64]Mr David Robinson, Director, Internet 2.0, Committee Hansard, 20 April 2023, p. 31.

[65]Mr David Robinson, Internet 2.0, Committee Hansard, 20 April 2023, p. 35.

[66]Ms Abigail Bradshaw, Deputy Director-General, Australian Signals Directorate, Committee Hansard, 12 July 2023, p. 14.

[67]Mr Peter Anstee, Acting First Assistant Secretary, Department of Home Affairs, Committee Hansard, 12 July 2023, p. 15.

[68]Jojo Percs, 'How decentralized is Threads? Not at all for now', Medium, 8 July 2023, https://medium.com/@jojo_percs/how-decentralized-is-threads-not-at-all-for-now-87279710539e (accessed 10 July 2023).

[70]Stan Schroeder, 'Threads, Meta's Twitter rival, is tracking you in all sorts of ways', Mashable, 6July2023, https://mashable.com/article/threads-tracking-data (accessed 10 July 2023).

[71]eSafety, Decentralisation—position statement, www.esafety.gov.au/industry/tech-trends-and-challenges/decentralisation (accessed 10 July 2023).