Chapter 6 - Algorithmic transparency

Overview

6.1Algorithms are essential in the digital environment. They facilitate a level of personalisation, helping users navigate the immense volume of online material to discover content of relevance and interest.[1]

6.2The Office of the eSafety Commissioner (eSafety) provided the following overview of the role of algorithms in digital services:

An algorithm is a coded sequence of instructions that is often used by online service providers to prioritise content a user will see.

These instructions are determined by platforms based on many factors, such as user attributes and patterns, and can involve personalised suggestions to achieve a particular goal, such as discovering new artists, friends, products, activities, and ideas, as well as helping business and creators efficiently reach a target audience.

For these reasons, algorithms are used by almost all digital platforms to amplify, prioritise and recommend content and accounts to their users. Their use and sophistication continues to grow, with multiple algorithms typically being active within a platform at any given time, all completing different tasks with different outcomes.[2]
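By way of illustration only, the following simplified sketch (written in Python, with invented feature names, weights and data that are not drawn from any actual platform's system) shows how a recommender algorithm of the kind described above might combine user attributes and predicted engagement to rank a content feed.

```python
# Illustrative sketch only: a hypothetical engagement-based recommender.
# Feature names, weights and data are invented for explanation and do not
# reflect any actual platform's algorithm.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    predicted_click: float   # model estimate that the user will click (0-1)
    predicted_share: float   # model estimate that the user will share (0-1)
    recency_hours: float     # hours since the post was published

def score(post: Post, user_topic_affinity: dict[str, float]) -> float:
    """Combine personalised and engagement signals into a single ranking score."""
    affinity = user_topic_affinity.get(post.topic, 0.1)   # user attributes and patterns
    engagement = 0.7 * post.predicted_click + 0.3 * post.predicted_share
    freshness = 1.0 / (1.0 + post.recency_hours / 24.0)   # newer posts rank higher
    return affinity * engagement * freshness

def rank_feed(candidates: list[Post], user_topic_affinity: dict[str, float]) -> list[Post]:
    """Order candidate posts so the highest-scoring content is shown first."""
    return sorted(candidates, key=lambda p: score(p, user_topic_affinity), reverse=True)

if __name__ == "__main__":
    user = {"music": 0.9, "news": 0.4}
    feed = rank_feed(
        [
            Post("a", "music", predicted_click=0.30, predicted_share=0.10, recency_hours=2),
            Post("b", "news", predicted_click=0.60, predicted_share=0.20, recency_hours=1),
            Post("c", "sport", predicted_click=0.80, predicted_share=0.50, recency_hours=12),
        ],
        user,
    )
    print([p.post_id for p in feed])
```

Even in this toy example, the ordering a user sees depends entirely on weights and inputs that are not visible to them, which is the transparency concern explored in the remainder of this chapter.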

6.3Algorithms are also used by some digital platforms to assist with content moderation and the identification of harmful material, as well as for targeted advertising.

6.4Automated decision making (ADM) may sometimes use artificial intelligence (AI) technologies but is often guided by rules-based formulas.

Risks from algorithm and ADM use by digital platforms

6.5The committee received concerns that algorithms used by digital platforms do not operate in a way that adequately supports community values, such as fairness, accuracy, privacy and user safety.[3]

6.6At the heart of concerns about these emerging risks and harms is a lack of transparency around the information and user behaviour that influence an algorithm’s operation, and around the algorithm’s intended outcome.

6.7The risks associated with some algorithms, particularly those used to curate social feeds and in search functions, are of growing concern.[4] The Department of Infrastructure, Transport, Regional Development, Communications and the Arts (DITRDCA) noted that the use of algorithms can lead to risks across a number of areas including:

dis- and misinformation

ranking and search algorithms relevant to how news is served to consumers

digital and media literacy initiatives

social harm issues – including echo chambers, online hate speech, and social media feed curation

discoverability of Australian screen and music content on streaming services

general consumer harms, including algorithmic biases or discrimination.[5]

6.8Evidence suggested that algorithms have the potential to amplify online harms, including radicalisation; expose users to material they would not have sought out; improperly elevate harmful messages or voices through filter bubbles; and contribute to ad targeting and the erosion of privacy.[6]

6.9This chapter explores some of these algorithm-related risks in more detail and considers the role of transparency in mitigating them.

Social harm concerns

Filter bubbles/echo chambers

6.10‘Filter bubble’ is a term used to describe the effect of online algorithms and user behaviour that results in users being presented with material reflecting limited perspectives.

6.11eSafety provided the following definition of an echo chamber or filter bubble:

An echo chamber, also known as a filter bubble, is an environment where a person mostly encounters information or opinions that reflect and reinforce their own. An online echo chamber can develop when a user only follows or interacts with like-minded people, or when recommender systems keep serving content that aligns with the user's search and engagement history.[7]
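The feedback loop described in this definition can be illustrated with a toy simulation (again hypothetical: the topics, weights and reinforcement rule below are invented for explanation). Because engagement with recommended content feeds back into what is recommended next, the simulated user's feed tends to narrow towards a single topic over time.

```python
# Illustrative sketch only: a toy simulation of the feedback loop behind a
# 'filter bubble'. The topics, weights and update rule are hypothetical.

import random

def recommend(interest: dict[str, float]) -> str:
    """Pick a topic with probability proportional to the user's modelled interest."""
    topics = list(interest)
    weights = [interest[t] for t in topics]
    return random.choices(topics, weights=weights, k=1)[0]

def simulate(rounds: int = 1000) -> dict[str, float]:
    # Start with a user whose modelled interests are evenly spread.
    interest = {"politics_a": 1.0, "politics_b": 1.0, "sport": 1.0, "science": 1.0}
    for _ in range(rounds):
        shown = recommend(interest)
        # Serving a topic and receiving engagement reinforces that topic's weight,
        # so the same kind of content becomes ever more likely to be shown again.
        interest[shown] += 0.1
    total = sum(interest.values())
    return {t: round(w / total, 3) for t, w in interest.items()}

if __name__ == "__main__":
    print(simulate())  # one topic typically ends up dominating the share of the feed
```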

6.12A range of submissions discussed the risks of algorithms operating in a way that creates filter bubbles or echo chambers.[8]

6.13One of the risks of algorithms amplifying some content is that they may deprioritise or exclude viewpoints or ideas contrary to the user’s existing beliefs. The committee was advised that ‘echo chambers can impact a person’s freedom of thought, access to information and autonomy, and can contribute to polarisation’.[9]

6.14Algorithm design elements can encourage, facilitate or intensify risks and harms, including engagement in, or exposure to, anti-social behaviour. The Alannah & Madeline Foundation advised:

It is harder to attribute this problem directly to design issues, but it seems reasonable to assume the risk is enhanced by features like popularity metrics, which can serve to reward shocking or emotive content; manipulation of user emotions, for example through recommending of extreme material; and 'echo chambers' encouraged by algorithms which sometimes function to normalise anti-social conduct.[10]

6.15eSafety also noted the potential for algorithms to amplify harmful and extreme content. Where platforms aim to optimise user engagement via content feeds, there is a risk that their algorithms will display increasingly shocking or extreme content to consumers. This content draws comments and is in turn considered ‘engaging’ by algorithms, and is thus amplified to an extensive number of users, ‘increasing the content’s reach and potential impact’.[11]

6.16eSafety raised additional risks:

Sometimes these algorithms operate in a way that results in the wide dissemination of content before human and editorial oversight is triggered. They can also artificially promote content that has been deemed ‘engaging’ without balancing other types of content and viewpoints.

On an individual level, these processes can increase the impact experienced by those exposed to harmful material. On a broader societal level, the amplification of content that promotes discriminatory views, such as sexism, misogyny, homophobia, or racism, may have adverse effects, such as normalising prejudice or hate. This may also contribute to radicalisation towards terrorism, violent extremism, and provide users with avenues to find associated groups.[12]

6.17That said, eSafety also highlighted that, despite some evidence linking algorithms to filter bubbles, these issues may be overstated.[13]

6.18The Australian Competition and Consumer Commission’s (ACCC) Digital Platforms Inquiry Final Report considered evidence both suggesting and disputing the existence of filter bubbles and echo chambers. Focusing on the impacts on the consumption of journalism, it concluded:

Algorithmic curation on digital platforms and user behaviour on social media have the potential to cause ‘echo chamber’ and ‘filter bubble’ effects, although the extent of any harm caused by these effects in Australia is not yet clear.[14]

6.19The ACCC further commented:

The specific effect is likely to depend on the algorithm in operation at the time, and the behaviours and cultures of platform users.[15]

Bias and discrimination (hate speech/online hate)

6.20The committee heard that the use of certain algorithms and ADM also risks creating or exacerbating existing online discrimination, bias and market inequalities.[16]

6.21ADM can make inaccurate or incorrect decisions and create barriers to consumer redress.[17] Examples included Airbnb’s social scoring algorithm, which is based on personal data, including social media activities, and which can be used to determine a user’s access to the service. Users have no oversight of how their score is created or of how decisions such as account suspension are made.[18] CHOICE also highlighted Tinder’s use of algorithms to create different charge rates based on a user’s age, geographical location and sexuality.[19]

6.22ADM may also result in ‘an intensification and amplification of pre-existing issues, problems and inequalities, rather than meaningfully changing them’.[20] For example, Digital Rights Watch noted:

Automated decision making systems used in recruitment can exacerbate pre-existing biases, in turn hindering people’s economic opportunities. For example, research has shown that recruitment algorithms favour male applicants.[21]

6.23The Human Rights Law Centre (HRLC) noted that gaps in current regulations enable online hate speech to persist, with the burden of identifying, avoiding and responding to harm borne by the affected individuals and communities.[22] It stated:

In Australia, victims of online hate speech have faced difficulty asserting their right to freedom from discrimination. For example, platforms have argued that they are beyond the reach of Australian privacy and antidiscrimination laws due to their corporate structure and incorporation in other countries.[23]

6.24Reset Australia highlighted the limitations of current legislation, noting:

Regulation focuses on individual pieces of content, and overlooks the role of platforms in promoting harmful content to children (via algorithms, for example). Hate speech, mis & disinformation are not adequately addressed in the current framework, but can be harmful.[24]

6.25The Australian Muslim Advocacy Network (AMAN) proposed the introduction of a duty of care on digital platforms ‘to uphold Australian hate speech standards, which may prompt platform investment in compliance units. Currently, they do not invest in this’.[25]

6.26The committee notes that algorithms and ADM are also used by platforms for content moderation and safety. Meta advised of its ‘increased use of proactive detection technology to identify and proactively remove and action hate speech’.[26] Ms Mia Garlick, Regional Director of Policy, Meta, explained:

I think this is an example of artificial intelligence and machine learning improving over time. When we first started disclosing our figures in relation to this, we were proactively identifying and removing around 13 per cent of all hate speech that we removed, and that's now well up over 80 per cent. We've certainly been working to make sure that we are able to be a lot more proactive in terms of removing harmful and hateful content on our services to make sure that people are coming to the platform and advertisers are advertising on the platform in connection to content that they find valuable and relevant.[27]

International approaches

6.27The Irish Council for Civil Liberties advised that the new European Union (EU) Digital Services Act (DSA) contains provisions important for reducing online hate and hysteria (see Box 6.1 for details of the relevant Articles).

Box 6.1 EU Digital Services Act

The Irish Council for Civil Liberties explained:

First, Article 38 compels digital platforms to give people the option to switch off the toxic algorithms that show them personalised material based on their political or philosophical views, or ethnicity or other intimate characteristics. These recommender systems cause hate and division, for the reasons set out at paragraph 7 (a), above [see submission]. The option to switch off a recommender system must be available whenever these algorithm[s] are active.

Second, Article 34 and 35 of the DSA require large digital platforms to assess and mitigate the risks caused by their systems, including risks to civic discourse and public security. This may be effective if we can avoid the platforms turning it into compliance theatre.

The DSA will be enforceable on large digital platforms from February 2024.

Irish Council for Civil Liberties, Submission 36, pp. 3–4.

6.28AMAN advised the United Kingdom (UK) online safety legislation relies on a user empowerment focus to address online hate and misinformation, which comes with risks:

We agree that user empowerment is important. However, we note the inherent limitations of user empowerment: Communities that are hypersceptical of hate speech controls and more likely to embrace absolutist free speech. They will not use options to remove hate speech and misinformation. This means that targeted ‘outgroups’ will continue to be endangered by dehumanising disinformation operations.[28]

Influence on public debate and democratic processes

6.29Submissions noted the risks of algorithms and ADM to public debate and democratic processes. As discussed above, algorithms may filter out or favour particular information and viewpoints, intentionally or otherwise.

6.30The Australian Broadcasting Corporation (ABC) raised concerns that ‘algorithms applied by search engines and social media (for example Meta's Facebook and Google's YouTube) can influence the news and information that people see, potentially leading to a concentration of power over public discourse and opinion formation’.[29] It further explained:

… well-functioning democracy depends on the free flow of accurate information, objective analysis, and diverse opinions. When an algorithm suggests and automatically plays the next video or recommends social media pages to a user, it is filtering information based on what the user appears to be most interested in. This can push users into a feedback loop and an endless cycle of like-minded content, which can present a particular risk when the content is biased or misleading, or doesn't show information that presents a different point of view.[30]

6.31Vault Cloud advised:

… the flow of information and algorithmic bias creating “echo chambers” or polarisation could potentially shape or manipulate public opinion resulting in herd mentality, which could cause destabilisation.[31]

6.32The committee was advised that aggregated data reflecting consumers’ activities, interests, values, attitudes and needs is increasingly being leveraged by political parties to run campaigns. Further, targeted campaigns to influence user behaviour are permeating into the political sphere ‘as exhibited by the 2016 Facebook-Cambridge Analytica matter’ which:

… raises the concern that attempts by political parties to influence individual behaviour threatens to undermine the integrity of the electoral process by interfering in the political and civic communication that is essential to representative democracy.[32]

Content moderation

6.33In addition to content recommendations, algorithms are used by digital platforms for content moderation and safety processes. The committee received evidence in support of greater transparency around content moderation decisions.

6.34The AMAN noted that ‘platforms use algorithms to prevent and reduce harms by semi-automating the process of flagging, removing, and re-ranking third-party contents likely to violate platform policies or laws’.[33] However, it advised:

When this process is performed at scale, the algorithms cannot perform perfectly and are continuously optimized to balance between precision and accuracy.

If a platform prioritizes accuracy over precision in using algorithms for content moderation, its process would have a high false positive rate. Most large platforms therefore choose to prioritize precision over accuracy, which allows most users to post contents but can sometimes lead to extensive harm when false negatives are shared widely.[34]
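The trade-off AMAN describes is, in effect, a choice of classification threshold. The following hypothetical sketch (invented scores and thresholds, not any platform's actual moderation system) shows how raising the confidence threshold for automated removal reduces false positives (legitimate posts wrongly removed) while increasing false negatives (violating posts left up).

```python
# Illustrative sketch only: how a removal threshold trades false positives
# against false negatives in automated moderation. Scores and labels are invented.

def moderation_outcomes(posts: list[tuple[float, bool]], threshold: float) -> dict[str, int]:
    """posts: (model_score, actually_violating). Remove a post if score >= threshold."""
    outcomes = {"true_positive": 0, "false_positive": 0, "true_negative": 0, "false_negative": 0}
    for score, violating in posts:
        removed = score >= threshold
        if removed and violating:
            outcomes["true_positive"] += 1
        elif removed and not violating:
            outcomes["false_positive"] += 1   # legitimate content wrongly removed
        elif not removed and violating:
            outcomes["false_negative"] += 1   # harmful content left up
        else:
            outcomes["true_negative"] += 1
    return outcomes

if __name__ == "__main__":
    sample = [(0.95, True), (0.80, True), (0.75, False), (0.60, True),
              (0.55, False), (0.40, False), (0.30, True), (0.10, False)]
    # A low threshold removes more harmful posts but also more legitimate ones;
    # a high threshold (prioritising precision) leaves more harmful posts up.
    for t in (0.5, 0.7, 0.9):
        print(t, moderation_outcomes(sample, t))
```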

6.35The AMAN further advised:

… the ability to assess dehumanising information operations with accuracy and precision is more possible than identifying violations of hate speech policies at large, because there is a visible and distinct formula that such operations use in order to dehumanise an outgroup to an ingroup audience.[35]

6.36Digital Rights Watch (DRW) highlighted risks with the moderation algorithms and community guidelines used by some platforms. It advised that, while automated systems may be necessary, decisions need to be made about who defines and designs the systems that curate content, and for what purpose. Those decisions have been made by private companies whose curation choices are driven exclusively by the desire for profit and growth.[36]

6.37DRW explained that the resulting risks included the exportation of United States (US) cultural values, via ‘community guidelines’, on a global scale, as the majority of dominant digital platforms are based in the US. The design and implementation of ‘a set of globally homogenous moral standards’ by digital platforms impacts ‘creative, cultural, and educational online expression in places whose norms may not align with those of the United States’.[37] DRW noted:

… artistic expression that includes nudity, as well as sexual education and activism have all been caught up in strict conservative content moderation policies regarding nudity. In 2018, Zuckerberg said it’s “easier to detect a nipple than hate speech with AI.”[38]

6.38Another risk DRW identified was the difficulty of accurately identifying and removing content en masse without a resulting over- or under-capture of particular forms of content:

Automated content moderation on popular social media sites has caused harm to users by disproportionately removing some content over others, penalising Black, Indigenous, fat, and LGBTQ+ people.[39]

6.39Some platforms acknowledge the risks around content moderation. For example, Meta advised that it is a founding member of the Digital Trust and Safety Partnership (DTSP), ‘which is developing approaches to evaluate digital platforms’ content moderation practices and drive globally consistent trust and safety outcomes’.[40]

Dis- and misinformation

6.40Misinformation is false, misleading or deceptive information that can cause harm. The Australian Communications and Media Authority (ACMA) advises that misinformation can include:

made-up news articles

doctored images and videos

false information shared on social media

scam advertisements.[41]

6.41The ACMA explains:

Misinformation can pose a risk to the health and safety of individuals, as well as society more generally. We have seen this with misinformation about COVID-19 vaccines and 5G technology.

Some misinformation is deliberately spread – this is called disinformation – to cause confusion and undermine trust in governments or institutions. It is also used to attract users to webpages for financial gain, where they may click on ads or be lured into financial scams.

But not all misinformation is deliberately spread to cause harm. Sometimes users share misinformation without realising it.[42]

6.42The rise of dis- and misinformation online, particularly on social media platforms, was noted in several submissions.[43]

6.43A prime example of the harm caused by dis- and misinformation was the threat it created to public health during the COVID-19 pandemic.[44] The HRLC explained:

The COVID-19 pandemic highlighted both the rapidly evolving nature of online mis- and disinformation, as well as its potential to undermine public health and fuel discrimination. Misleading content about the origin and nature of the virus spread rapidly across digital platforms in Australia. This disinformation combined with hate speech online to fuel discrimination and stoke violence against Asian people in Australia, threatening their safety. Australian health authorities pointed to the spread of online mis- and disinformation contributing to a spike in cases during a critical period in 2021.[45]

6.44Similarly, the HRLC noted the risks of disinformation influencing or even undermining democratic election processes:

Powerful false narratives can be quickly amplified to millions with the potential to confuse the public, distort outcomes and undermine public confidence in electoral processes and results.[46]

6.45It further noted:

Facebook identified 2.2 billion fake accounts as engaging in “coordinated inauthentic behaviour” in the lead-up to the 2019 election. Local disinformation campaigns, such as Mediscare in 2016 and Death Tax in 2019, are becoming a common feature of Australian elections as campaigns and news consumption move further online.[47]

Existing regulatory framework

6.46In 2021 a voluntary Australian Code of Practice on Disinformation and Misinformation (ACPDM) was created with eight signatories: Adobe, Apple, Google, Meta, Microsoft, Redbubble, TikTok and Twitter. The voluntary code is administered by Digital Industry Group Inc. (DIGI). Signatories release an annual transparency report on the measures they are taking to address dis- and misinformation.[48]

6.47The DITRDCA advised:

The ACMA’s June 2021 Report on the adequacy of digital platforms’ disinformation and news quality measures explored the question of whether the voluntary code meets community expectations. It made a number of findings and recommendations which have informed both DIGI’s recently revised Code, released in December 2022, and the Government’s decision to introduce new powers for the ACMA to combat dis- and misinformation on digital platforms.[49]

6.48The ACMA report noted ‘existing efforts by signatories to the voluntary industry code were a good first step in efforts to tackle misinformation and disinformation on digital platform services’.[50]

6.49Despite this, the government has proposed additional reserve powers for the ACMA to act, should industry efforts in regard to misinformation and disinformation be inadequate.[51]

6.50It was not clear to the Communications Alliance what evidence prompted the Government to move towards implementing additional regulations or legislation in this area, when the voluntary code has only been in operation for a limited time.[52]

6.51Under the proposal, the new laws would provide the ACMA with additional powers to combat online dis- and misinformation. The new powers are designed to strengthen and support the existing voluntary code and will extend to non-signatories of the voluntary code.[53] DITRDCA noted:

The new powers will enable the ACMA to monitor efforts and require digital platforms to do more, placing Australia at the forefront in tackling harmful online misinformation and disinformation, while balancing freedom of speech.

The proposed powers would:

enable the ACMA to gather information from digital platform providers, or require them to keep certain records about matters regarding misinformation and disinformation

enable the ACMA to request industry develop a code of practice covering measures to combat misinformation and disinformation on digital platforms, which the ACMA could register and enforce

allow the ACMA to create and enforce an industry standard (a stronger form of regulation), should a code of practice be deemed ineffective in combatting misinformation and disinformation on digital platforms.

The ACMA will not have the power to request specific content or posts be removed from digital platform services.[54]

6.52DIGI indicated to the committee its support for the proposal that would see the ACMA granted an additional oversight role over the ACPDM and over dis- and misinformation more broadly.[55]

6.53Public consultation on the exposure draft of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023 closed in August 2023.[56] The Bill has not yet been introduced into Parliament.

6.54Agencies in the Communications portfolio, including the ACMA and eSafety, also use a range of less direct levers to counter the effects of dis- and misinformation such as:

support for high quality public interest journalism, e.g. through funding of ABC and SBS

education programs to improve media and digital literacy in the community

provision of reliable information including in languages other than English, e.g. SBS provided critical COVID-19 health information in 63 languages

the ACMA, under the Broadcasting Services Act 1992, regulating news and journalism content on traditional radio and television broadcasting services

new powers currently under development that will allow the ACMA to combat online dis- and misinformation.[57]

Concerns raised

Self-regulation and co-regulation

6.55Submissions argued that self-regulation through the ACPDM is inappropriate for digital platforms where community needs and public interest are not at the forefront of business practices.[58] For example, the HRLC submitted:

In the European Union, introduction of the Digital Services Act was driven by growing recognition that self- and co-regulatory models are inadequate and ineffective.[59]

6.56See Box 6.2 below for details of the EU’s approach to disinformation.

6.57The AMAN noted the current Australian voluntary code has no effective enforcement mechanism when the existing measures are not achieving the desired outcome.[60]

Box 6.2 International approaches

EU’s Code of Practice on Disinformation

The EU 2022 Code of Practice on Disinformation is a key component of the EU’s strategy to combat disinformation online. It is also a voluntary code. The DITRDCA advised:

Under the updated Code, signatories commit to action in several domains, including; demonetising the dissemination of disinformation; ensuring the transparency of political advertising; empowering users; enhancing cooperation with fact-checkers; and providing researchers with better access to data.[61]

Additionally, the EU DSA establishes a co-regulatory approach to managing online dis- and misinformation, requiring designated online platforms to have measures in place to mitigate risks from the spread of illegal content.

The DSA is regulated by a new European Board of Digital Services and the European Commission, which will have ‘direct enforcement powers and be able to impose fines of up to 6% of a service’s global turnover for breaches.’[62]

Content moderation focus inappropriate

6.58The HRLC also advised that measures to address online harm must look beyond content moderation and beyond the current focus, reflected in transparency reports under the voluntary code, on content takedown and flagging.[63] Such an approach lags behind the identification of material, with action often occurring well after damage is done.[64] The HRLC noted:

… defining and identifying disinformation and harmful content is difficult, especially in real time and across hundreds of languages and countless societal contexts, and platforms are poorly placed to arbitrate the appropriateness of political content.[65]

6.59The committee was advised that there are also censorship risks arising from content moderation approaches. The HRLC stated:

Around the world, there is a growing body of evidence of excessively broad and vague laws becoming tools of governments to compel private companies to police communication in ways that unjustifiably limit public debate and freedom of expression. Responses to the problem of disinformation, whether on the part of governments, regulatory bodies or platforms themselves, must not infringe upon the right to freedom of expression and the right to access information – two cornerstones of democratic discourse.

Regulation that relies on content moderation can lead to limits on these rights in circumstances where it was never intended. This is because penalising platforms for content-moderation failures incentivises platforms to err on the side of caution, resulting in restrictions on freedom of expression and the right to access information in circumstances well beyond what was contemplated by the regulatory model.

By focusing on content moderation alone, governments miss the opportunity to address the upstream drivers of disinformation and hate speech, which will be far more effective in the long term.

For all these reasons, content moderation ought to be seen as only one part of any comprehensive and effective framework for digital regulation.[66]

6.60Further, the Developers Alliance emphasised that remedies to misinformation should focus on the users creating the misleading content rather than the platform. It advised:

We fundamentally believe that users should be accountable for what they post, not platforms. Platforms in turn should be obligated to publish and enforce policies for what is allowable on their service, and to enable their user community to participate in the content moderation process. Platforms should not be placed in the position of being punished for user behavior [sic] that violates their policies if it somehow evades reasonable moderation processes.[67]

6.61Meta highlighted that it is always conscious of the fine balance between free speech and managing dis- and misinformation when setting its internal policies. Meta approaches misinformation in the following way:

Our approach to misinformation falls under a three-part framework, and we have provided detail on each of these below:

Remove misinformation that is likely to directly contribute to imminent physical harm, interfere with the functioning of political processes (including voter and census interference), and certain highly deceptive manipulated media.

Reduce the spread of misinformation that is identified and verified as false by independent third party fact-checkers.

Promote authoritative information and develop tools to inform our users, so they can make their own decisions on what to read, trust and share.[68]

Proposed action to address dis- and misinformation

Regulatory intervention

6.62Some submissions supported regulatory intervention. For example, the HRLC supported the notion that regulator-drafted industry standards should be the norm.[69]

6.63The Consumer Policy Research Centre noted the voluntary ACPDM was inadequate, proposing that digital platforms be obliged to report their compliance with the code to an adequately resourced regulator that can assess and enforce the code. It stated ‘[c]ompliance reporting must be transparent so businesses can be publicly accountable for their performance against the obligations’.[70]

6.64The Special Broadcasting Service Corporation (SBS) also supported regulatory interventions ‘to protect the availability of reliable, trusted and impartial news and information’. It noted these features are particularly important in contexts in which distribution platforms (such as Meta) hold monopolistic, or near monopolistic, positions over markets and audiences.[71]

6.65Google emphasised the need for flexibility in any regulatory response to accommodate changes in the fast-moving digital world and leave room for innovative, adaptable solutions. It noted ‘the risk of enshrining into law responses or frameworks that might prove counterproductive or outdated in the months that follow’.[72] Further, Google stated:

We note that the threat models continue to evolve as bad actors change their attack patterns and Internet uses change over time – remedies that were effective two years ago may not be best suited to the next wave of challenges.[73]

Increase digital media literacy

6.66Some submissions called for increased digital and media literacy to assist users in identifying trustworthy online material.[74]

6.67The Australian Media Literacy Alliance (AMLA) defines media literacy as ‘the ability to critically engage with media in all aspects of life. It is a form of lifelong literacy essential for full participation in society’.[75]

6.68The committee was advised by the AMLA that Australia has low digital literacy confidence:

Research by AMLA core members shows that many adults and children have a low level of confidence in their own media abilities and most say they are not getting support to help them. Just one third of young Australians think they can tell fake news from real news, and almost two thirds (64%) of adults are not confident that [they] can tell if a website can be trusted. Media literacy competency is negatively correlated with being more than 55 years old, having low literacy, living with a disability, having a low income or living in regional Australia.[76]

6.69The AMLA proposes working with the Australian Government to progress a national media literacy policy, strategy and framework. The AMLA explained:

National approaches support media literacy educators, including schools, libraries, national organisations and media organisations, to work together in a coherent way while allowing for benchmarking over time. The strategy should work across all ages, but include particular attention to adults who were not able to access curriculum resources through schools, and those with lower media literacy skills and include the development of resources, toolkits and networking opportunities.[77]

6.70Similarly, the Australian Medical Association (AMA) advised digital literacy can enhance overall health literacy. This in turn can be bolstered by enhancing the prominence of reputable health sources such as Health Direct. The AMA noted sites like Health Direct ‘need to be the first sites to show up on search browsers … to help counter access to misinformation’.[78]

6.71The AMA also proposed further Australian Government investment in ‘long-term, robust online advertising to counter health misinformation, including on social media channels’.[79] It stated:

This should include promotion of vaccine safety, as well as campaigns on the health risks associated with alcohol, junk food, online gambling, tobacco and other drugs. We also implore international digital health platforms to acknowledge their public health responsibility and work actively to counter health misinformation on their platforms.[80]

6.72More broadly, the committee was advised by the Australian Library and Information Association (ALIA) and National and State Libraries Australasia (NSLA) that the Australian government needs to work with these organisations ‘to provide targeted support for library staff and educators dealing with the impact of new technologies on media and information literacy, alongside resources to support community media literacy’.[81]

Australian content discoverability

6.73The committee heard that access to Australian content, including television, music, publishing, video games and radio, is at risk from algorithmic prominence decisions set by platforms, such as on transactional video-on-demand and subscription services.[82]

6.74The DITRDCA noted:

… the ready availability of mass content produced in other countries on streaming services, particularly the United States, risks crowding out the voices of Australian storytellers. Australia’s content and cultural sectors also face various issues in how cultural content is made accessible and visible on digital platforms’ algorithmically-driven recommendation systems. In particular, the prominence of content on platforms and services can influence users’ viewing choices, thereby impacting the success of Australian content.[83]

TV and news journalism

6.75The ABC advised the committee that its ability to fulfil its role as a national broadcaster, including its contribution to a range of social and cultural outcomes such as trusted independent public interest journalism, is directly related to the ease with which its content and services can be found and accessed by Australians.[84]

6.76Audience viewing behaviour has shifted towards increased use of technologies and platforms, including aggregated search applications (aggregator apps) such as Apple TV and Google TV.[85]

6.77The ABC highlighted that search and recommendation facilities on aggregator apps employ algorithms to determine the prominence of content presented to viewers.[86] A lack of algorithmic transparency means users and content providers alike are unaware of how programs are selected for prominence.[87] The ABC commented:

The ABC provides data about the programs on ABC iview to the aggregator platforms to aid discovery via their aggregated search facilities and expects that, if a search turns up content on ABC iview and the user selects it, the platform will launch the ABC iview app to play it. However, within their apps and devices, Apple and Google are also promoting purchase of their own content through subscriptions or transactional video-on-demand (TVOD) purchases. Search and discovery of free ABC content can be effectively used to promote paid versions of the same programs. Moreover, the Corporation has no way of ensuring that its versions of programs will be most prominently displayed in search results.[88]

Publishers and authors

6.78ALIA and NSLA raised concerns about prominence decisions made by algorithms and the impacts on Australian creators and authors. They argued in favour of increased transparency:

Algorithmic transparency may be particularly important when applied to vertically integrated platforms. Platforms that act as producer, seller and distributor have an inbuilt incentive to promote their own products. For libraries concerned with creation of accurate, quality and local content, this is concerning. A very simple example is the way that platforms may promote international bestsellers or “homebrand” author content which is either cheaper to produce or in which they get a larger profit margin, over Australian creators. Lack of algorithmic transparency means that it is not possible to see the extent to which Australian authors [are] being disadvantaged, and consumers are often unaware that the results that they see are the result of priorities and decisions made to maximise profits.[89]

6.79The Australian Publishers Association noted similar concerns:

A continued concern is that algorithms of retailers do little to support or make visible Australian cultural content. Given the number of books that are released globally in any year and available at any one time, there is a clear national value in Australian content being foregrounded and visible.[90]

6.80The committee was advised by DRW that platforms’ algorithmic decisions:

… can have the effect of flattening out the diversity of content, instead promoting and recommending the most popular, dominant content, often at the detriment of smaller, independent creatives and artists.[91]

6.81This is visible on streaming services, such as Spotify and Netflix, and with books on Amazon or Audible.[92]

Music and radio

6.82In the realm of music content streaming, the DITRDCA advised:

There are crossover issues between algorithmic transparency, and discoverability and availability of Australian music on music streaming services.[93]

6.83The DITRDCA is exploring ‘how Australian content can be more easily and readily accessed on music streaming services’. This would include:

… the role that streaming service algorithms play in accessibility and discoverability. Algorithmic transparency is also likely to be a priority for Australian music content creators as they aim to improve their revenue streams.[94]

6.84Commercial radio has an important role in providing ‘local content, news and emergency information to Australians who receive no other local free to air broadcast content.’[95] It also plays a socially inclusive role, engaging with local communities and supporting Australian stories and voices.[96]

6.85Commercial radio has strict legislated Australian content requirements, including minimum hours of hyper-local content broadcasts by regional stations, and rules for local staffing, facilities and news content in the event of a change of control. Commercial radio stations are also required under their Code of Practice to play minimum amounts of Australian music, including a portion of work by new artists.[97]

6.86Despite these important roles, Australian commercial radio is also affected by the prominence decisions of platforms. Commercial Radio & Audio advised that listening over connected devices represents an increasing portion of radio listenership. It stated:

If radio is not afforded prominence on these devices, it may face significant challenges. Prompt action must be taken to protect Australian radio long term, given the rapid growth of connected devices, particularly in vehicles and smart speakers.[98]

National Cultural Policy

6.87The DITRDCA has a policy and program development and delivery role under the National Cultural Policy.[99]

6.88In January 2023 the Australian Government released ‘Revive: a place for every story, a story for every place’ (Revive) – Australia’s National Cultural Policy for the next five years.[100]

6.89Among other measures, Revive ‘introduces requirements for streaming services to ensure continued access to Australian screen content’ and ‘commits to Government action to ensure Australian music is “visible, discoverable and easily accessible across platforms to all Australians”’.[101]

6.90The DITRDCA’s submission stated:

Discoverability of Australian music content on streaming platforms and services is vital for Australian artists that distribute their music online, and compete with internationally recognised artists both abroad, and even in Australia.[102]

Prominence framework

6.91The committee was advised that the government has committed to introducing a prominence framework for broadcaster video-on-demand ‘to ensure local TV services are easy for Australian audiences to find on connected TV devices.’[103] However, this framework will not apply to the use of algorithms on computing devices.[104]

6.92The ABC suggested that an expansion of the prominence framework could be considered as part of the framework review following its first year of operation.[105]

Targeted advertising and harmful product marketing

6.93Algorithms drive decisions around users’ exposure to particular advertising and can therefore give rise to safety issues directly affecting children, young people and vulnerable users.[106]

6.94eSafety noted it is difficult to measure the severity of harms caused by algorithmic decisions about online advertising, as some types of content, such as dieting ads, may be harmful only to a limited group of people and communities.[107]

6.95These concerns are discussed further in Chapter 8: Online safety.

Lack of transparency

6.96The overarching concern with algorithm use and ADM processes by digital platforms is a lack of transparency around the inputs, assumptions and intended purpose of particular algorithms or ADM.[108]

6.97eSafety advised:

While the use of algorithms offer social and economic benefits, their design and purpose can be opaque, and they can be also exploited by users (both businesses and individual end-users) as well as the digital platform, resulting in harm to some individuals.[109]

6.98AMLA emphasised the importance of transparency for consumers, including the need for accessible information on whether and how consumers can control or influence the advertising and personalised content they are exposed to.[110] AMLA stated:

Transparency plays an important role in enabling Australians to productively engage with digital media. Many Australians do not understand why they see what they see, as the algorithms that determine what content is served to whom, and based on what data, are invisible.[111]

6.99The HRLC noted a lack of voluntary transparency from platforms around their algorithm use:

The Australian public currently has few meaningful opportunities to ever understand how platforms and algorithms shape the information environment in which we form views and make decisions. It is largely thanks to industry whistleblowers that we have achieved any scrutiny and accountability for the big tech companies. Now and into the future, we should not need to rely on whistleblowers in order to understand the ways we are tracked and targeted, and the systems that determine the information that is delivered to us.[112]

6.100Further, CHOICE highlighted that the existence and use of ADM systems is often hidden from consumers. It noted that:

This limits the ability of consumers to provide consent and restricts the ability of regulators and government to assess algorithms. The use of ADM by business should be clearly disclosed on consumer-facing platforms like websites or apps. ADM should also be disclosed in privacy policies in plain language, and should be available for regulators to audit.[113]

6.101eSafety welcomed the initial efforts by industry to increase transparency but noted they are limited and do not offer substantive explanations of the ways in which algorithms may or may not contribute to online harms.[114]

Transparency risks

6.102The committee also heard some concerns about the risks of sharing details about particular algorithm design, including potential manipulation or exploitation of the algorithms by users, businesses, or bad actors.[115]

6.103eSafety noted:

While eSafety appreciates the significance of minimising the opportunity for key algorithms to be ‘gamed’ by businesses or bad actors, it is important to ensure that digital platforms are accountable for the impact of their design choices and that users are empowered to make informed decisions.[116]

Current regulatory measures

6.104Currently, there is no comprehensive mechanism to assess the measures implemented by digital platforms to address harms arising from algorithm and ADM use.

6.105There is ongoing work across government and a number of mechanisms that target specific ‘problem areas’. For example, the government is progressing work to better understand the operation of algorithms on digital platforms which ‘will also consider findings from other work underway across Government throughout 2023’.[117] Work in progress includes:

the Attorney-General’s Department preparing a response to the Privacy Act Review Report 2022;

the Digital Platforms Regulators Forum undertaking a literature review examining algorithms, including in recommender systems, content moderation and targeted advertising, ‘to enhance members’ understanding of associated regulatory risks’;[118]

the Department of Industry, Science and Resources conducting a review of AI regulation and ADM.[119]

6.106This work will culminate in a joint report to government by the first quarter of 2024 including:

… options to build capability around future algorithm research and expertise, and with advice on whether Government regulation of algorithms is required and, if so, what options for regulation are available.[120]

6.107eSafety advised that it is undertaking a number of regulatory steps ‘with the aim of increasing platforms’ transparency and accountability in relation to how algorithms can impact user safety, including exercising new powers under the OSA [Online Safety Act]’.[121]

6.108The DITRDCA also advised:

Noting the need for a consideration of algorithms to be aligned across Government, the Department has commenced preliminary work with other agencies to consider the type and scale of harms as a result of algorithmic use, as well as the current transparency levels of various algorithms.[122]

The argument for change

6.109eSafety explained:

Given the complex, evolving, and dynamic nature of algorithms and their use in the online environment, there is no single, fixed regulatory approach to address their potential benefits and harms.[123]

6.110The DITRDCA noted the concerns raised in the Final Report of the House of Representatives Select Committee’s Inquiry into Social Media and Online Safety about ‘the opaqueness of algorithms, which has the potential to heighten harms associated with them’.[124] The DITRDCA added:

The Committee was of the view that a statutory requirement for platforms to provide the details of how they are working to minimise harms caused by algorithms would increase transparency without compromising commercially sensitive information. The Department is developing advice to the Government about the Committee’s findings.[125]

6.111The committee was advised that the Privacy Act Review Report 2022 makes proposals relating to algorithmic transparency and DIGI indicated these proposals should be contemplated in the context of the ongoing Government work on AI and ADM.[126]

6.112The Foundation for Alcohol Research and Education also raised the need to ‘implement mandatory requirements for digital platforms to make advertising information accessible, including their data practices and automated decision systems’.[127]

Possible solutions

International approaches of interest

6.113Internationally, policy makers and regulators are attempting to address the same concerns as Australia in relation to online safety and algorithmic transparency.

6.114Common approaches include increasing transparency and accountability, risk-based regulatory regimes, and systematic reporting requirements.[128]

6.115eSafety highlighted the following key international frameworks:

The UK Algorithmic Transparency Standard for use of algorithmic tools in government decision making:

The UK Central Digital and Data Office (CDDO) has developed the Algorithmic Transparency Standard, a recording standard that helps public sector bodies provide clear information about the algorithmic tools they use and why they are using them. The Standard is one of the world’s first policies for transparency on the use of algorithmic tools in government decision making and is internationally renowned as best practice.[129]

The EU DSA:

The DSA includes data access obligation and transparency measures for major digital platforms, which extends to the algorithms used for recommending content or products to users.[130]

6.116The European Centre for Algorithmic Transparency supports enforcement of the DSA:

It contributes scientific and technical expertise to the Commission's exclusive supervisory and enforcement role of the systemic obligations on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) provided for under the DSA.[131]

Risk-based framework

6.117Evidence to the committee emphasised the need for a risk-based regulatory model.

6.118For example, DIGI advised it ‘agrees with the need for risk-based frameworks to prevent and address issues related to the use of Artificial Intelligence (AI) and Automated Decision Making (ADM), such as preventing discrimination’.[132]

6.119The HRLC argued for a comprehensive transparency framework:

… regulation should be focused on increasing transparency and accountability. Transparency features of an appropriate regulatory model should place the onus on digital platforms to identify the risks posed by their platforms and the steps they will take in response, as well as providing information to users about their advertising and recommender systems. These obligations should be reinforced by a well resourced, independent regulator with the power to verify digital platforms’ information, and hold them accountable for failures to report and act.[133]

6.120CHOICE further highlighted that:

Consumers do not have strong protections from business’ use of harmful ADM practices. The Federal Government should legislate a risk-based ADM framework, with restrictions and prohibitions on harmful use. This can be achieved either by:

(a) expanding Office of the Australian Information Commissioner’s (OAIC) Privacy Impact Assessment compliance scheme to cover private businesses and to incorporate a risk-based framework on ADM; or

(b) establishing separate legislation which regulates the use of artificial intelligence including ADM.[134]

6.121The HRLC supported regulator-drafted industry standards as the norm.[135]

Lead regulator

6.122The OAIC advised the committee that consideration should be given to whether existing bodies could be resourced to take on any new regulatory activities in relation to algorithms. It noted:

Given the work already taking place to regulate algorithms and the range of existing regulatory bodies operating in relation to digital platforms, careful consideration should be given to whether establishing an oversight body would be the most appropriate model.[136]

6.123The Centre for AI and Digital Ethics supported enhancing the ACCC’s and the ACMA’s investigative capacities by expanding in-house technical research and data science expertise, to facilitate independent investigations and better discharge their regulatory duties.[137] Noting that the ACCC holds a similar role to the US Federal Trade Commission (FTC), the Centre for AI and Digital Ethics advised:

We highlight the FTC's recent announcement of an Office of Technology which will hire individuals with backgrounds in technology to augment the FTC's consumer protection and antitrust missions. We advocate for a similar expansion of capabilities at the ACCC and support the parallel appointment of a Chief Technologist.[138]

6.124CHOICE similarly advised that:

The Federal Government should empower an existing regulator with the adequate resources and expertise to regulate ADM. The regulators most suitable for this role would be the ACCC or OAIC. Without strong regulators, consumers may be unfairly discriminated against, excluded, and profiled by ADM systems.[139]

Key features for regulatory intervention

6.125eSafety noted some general principles for any regulation or oversight of algorithms:

Any regulation or oversight of algorithms should safeguard the rights of users, preserve the benefits of these systems and foster healthy innovation. Important considerations for regulatory efforts targeted at algorithms include:

harmonising efforts across global government agencies to avoid a fragmented regulatory environment and unnecessary duplication

understanding the underlying ad-based revenue models which many large digital platforms employ and aligning incentives so that safety considerations are considered in tandem with business incentives

enhancing education and algorithmic literacy in recognition of the fast-paced nature of technology and that regulation alone is not able to remove all risks.[140]

6.126The ABC drew attention to features of the EU’s DSA, which call for greater algorithm transparency and are intended to hold platforms to account for the societal harms stemming from the use of their services. The ABC noted that these provisions appear to ‘provide some positive elements that could be reflected in any potential Australian regulatory model’.[141]

6.127The Tech Council of Australia advised that the EU General Data Protection Regulation ‘provides a right to meaningful information about the logic involved in automated decisions, such as those used in recommendations systems, credit and insurance risk systems, advertising programs and social networks’. It noted that the Attorney-General’s Department has recommended a similar right in the Privacy Act Review, alongside other transparency requirements that would apply to ADM.[142]

Algorithm code reviews

6.128The committee was advised by eSafety that code or pseudo-code reviews are not a practical solution given the expertise and time required and the ever-changing nature of the codes.[143] Clear legislative mandates would be required to support the associated access to potentially sensitive user data and the process employed to conduct the reviews.[144]

Data access regime/public interest research

6.129Evidence to the committee indicated a high level of support for transparency around advertising, recommender and content moderation systems for public interest research.[145]

6.130This is a key feature of the EU’s DSA and was noted by a range of submitters. For example, the HRLC advised:

Under the DSA, on request from the relevant regulator, platforms are required to provide external researchers with access to data for the purposes of conducting research on detection, identification and understanding of the systemic risks to which risk assessments apply, and assessment of the adequacy, efficiency and impacts of platforms’ risk mitigation measures. This data access regime is another significant feature of the DSA, and reinforces commitments made by signatories to the EU’s Strengthened Code of Practice on Disinformation.[146]

6.131The HRLC further noted:

Concerns raised by digital platforms about the implications of sharing their data with governments and researchers – such as privacy and exploitation concerns – are legitimate, but they are also surmountable. Rather than allowing these concerns to outweigh the critical value of transparency, they should be addressed through appropriate safeguards for protecting sensitive data.[147]

6.132The Centre for AI and Digital Ethics also supported protections for public interest research and enhancements to the regulator’s analytical and investigative capacities to combat the spread of dis- and misinformation. It noted that ‘[r]esearchers investigating harms of digital platforms have in the past been subject to reprisals and hindrances by the platforms themselves’.[148] It stated:

We support the introduction of mandatory reporting and disclosure laws for Big Tech companies, protections for public interest research, and enhancements to regulator's [sic] analytical and investigative capacities, to combat the spread of mis- and dis-information on digital platforms.[149]

Footnotes

[1]See, for example, Australian Broadcasting Corporation (ABC), Submission 4, p. 2; Department of Infrastructure, Transport, Regional Development, Communications and the Arts (DITRDCA), Submission 9, p. 4; Ms Mia Garlick, Regional Director of Policy, Meta, Proof Committee Hansard, 22 August 2023, p. 17.

[2]Office of the eSafety Commissioner (eSafety), Submission 2, pp. 2–3.

[3]ABC, Submission 4, p. 2; DITRDCA, Submission 9, p. 4; CHOICE, Submission 54, p. 1.

[4]DITRDCA, Submission 9, p. 4.

[5]DITRDCA, Submission 9, p. 4.

[6]DITRDCA, Submission 9, p. 4.

[7]eSafety, Glossary, www.esafety.gov.au/about-us/glossary (accessed 17 October 2023).

[8]See, for example, ABC, Submission 4, pp. 2–3; eSafety, Submission 2; Office of the Australian Information Commissioner (OAIC), Submission 61; Commercial Radio & Audio (CRA), Submission 43; DITRDCA, Submission 9, p. 4.

[9]eSafety, Submission 2, pp. 3–4.

[10]Alannah & Madeline Foundation, Submission 41, p. 6.

[11]eSafety, Submission 2, p. 3.

[12]eSafety, Submission 2, p. 3.

[13]eSafety, Submission 2, p. 4.

[14]ACCC, Digital Platforms Inquiry Final Report, June 2019, p. 345.

[15]ACCC, Digital Platforms Inquiry Final Report, June 2019, p. 349.

[16]CHOICE, Submission 54, p. 1.

[17]CHOICE, Submission 54, p. 1.

[18]CHOICE, Submission 54, p. 2.

[19]CHOICE, Submission 54, p. 2.

[20]Digital Rights Watch, Submission 68, p. 31.

[21]Digital Rights Watch, Submission 68, p. 31.

[22]Human Rights Law Centre, Submission 50, p. 8.

[23]Human Rights Law Centre, Submission 50, p. 8.

[24]Reset Australia, Submission 74, p. 6.

[25]Australian Muslim Advocacy Network (AMAN), Submission 44, p. 23.

[26]Ms Mia Garlick, Regional Director of Policy, Meta, Proof Committee Hansard, 22 August 2023, p. 23.

[27]Proof Committee Hansard, 22 August 2023, p. 23.

[28]AMAN, Submission 44, p. 23.

[29]ABC, Submission 4, p. 2.

[30]ABC, Submission 4, p. 2.

[31]Vault Cloud, Submission 38, [p. 7].

[32]Mr Joshua Zubak, Submission 27, p. 5.

[33]AMAN, Submission 44, pp. 15–16.

[34]AMAN, Submission 44, pp. 15–16.

[35]AMAN, Submission 44, p. 16.

[36]Digital Rights Watch (DRW), Submission 68, p. 27.

[37]DRW, Submission 68, p. 26.

[38]DRW, Submission 68, p. 26.

[39]DRW, Submission 68, p. 26.

[40]Meta, Submission 69, p. 70.

[41]ACMA, Online Misinformation, www.acma.gov.au/online-misinformation (accessed 21 September 2023).

[42]ACMA, Online Misinformation, (accessed 21 September 2023).

[43]See, for example, Centre for AI and Digital Ethics, Submission 23, [p. 15]; Human Rights Law Centre (HRLC), Submission 50, p. 6; Australian Media Literacy Alliance, Submission 55, p. 2.

[44]Centre for AI and Digital Ethics, Submission 23, [p. 15]; HRLC, Submission 50, p. 6.

[45]HRLC, Submission 50, p. 6.

[46]HRLC, Submission 50, p. 6.

[47]HRLC, Submission 50, pp. 6–7.

[48]ACMA, Online Misinformation, (accessed 21 September 2023).

[49]DITRDCA, Submission 9, p. 9.

[52]Communications Alliance, Submission 58, p. 6.

[53]DITRDCA, New ACMA powers to combat misinformation and disinformation, www.infrastructure.gov.au/have-your-say/new-acma-powers-combat-misinformation-and-disinformation (accessed 20 October 2023).

[54]DITRDCA, New ACMA powers to combat misinformation and disinformation, (accessed 20 October 2023).

[55]DIGI, Submission 65, p. 1.

[56]DITRDCA, New ACMA powers to combat misinformation and disinformation, (accessed 20 October 2023).

[57]DIGI, Submission 65, p. 9.

[58]HRLC, Submission 50, p. 9; CPRC, Submission 60, p. 9.

[59]HRLC, Submission 50, p. 9.

[60]AMAN, Submission 44, p. 7.

[61]DITRDCA, Submission 9, p. 10.

[62]ACMA, Submission 24, p. 3.

[63]HRLC, Submission 50, p. 8.

[64]HRLC, Submission 50, p. 8.

[65]HRLC, Submission 50, p. 8.

[66]HRLC, Submission 50, pp. 8–9.

[67]Developers Alliance, Submission 35, p. 6.

[68]Meta, Submission 69, p. 12.

[69]HRLC, Submission 50, p. 9.

[70]Consumer Policy Research Centre (CPRC), Submission 60, p. 9.

[71]Special Broadcasting Service Corporation (SBS), Submission 3, p. 6.

[72]Google, Submission 49, p. 15.

[73]Google, Submission 49, p. 16.

[74]Australian Media Literacy Alliance (AMLA), Submission 55, [p. 1]; Australian Medical Association (AMA), Submission 66, pp. 5–6; Australian Library and Information Association (ALIA) and National and State Libraries Australasia (NSLA), Submission 57, pp. 1, 6; Google, Submission 49, p. 15.

[75]AMLA, Submission 55, [p. 1].

[76]AMLA, Submission 55, [pp. 1–2].

[77]AMLA, Submission 55, [pp. 1–2].

[78]AMA, Submission 66, p. 5.

[79]AMA, Submission 66, p. 6.

[80]AMA, Submission 66, p. 6.

[81]ALIA and NSLA, Submission 57, p. 8.

[82]See, for example, ABC, Submission 4; DITRDCA, Submission 9, p. 12; Screen Producers Australia, Submission 15; CRA, Submission 43; Free TV Australia, Submission 17; Australian Publishers Association, Submission 56.

[83]DITRDCA, Submission 9, p. 4.

[84]ABC, Submission 4, p. 3.

[85]ABC, Submission 4, pp. 3–4.

[86]ABC, Submission 4, p. 3.

[87]ABC, Submission 4, p. 4.

[88]ABC, Submission 4, p. 4.

[89]ALIA and NSLA, Submission 57, p. 6.

[90]Australian Publishers Association, Submission 56, p. 3.

[91]DRW, Submission 68, p. 27.

[92]DRW, Submission 68, p. 27.

[93]DITRDCA, Submission 9, p. 13.

[94]DITRDCA, Submission 9, p. 13.

[95]CRA, Submission 43, p. 10.

[96]CRA, Submission 43, p. 13.

[97]CRA, Submission 43, p. 11.

[98]CRA, Submission 43, p. 7.

[99]DITRDCA, Submission 9, p. 5.

[100]DITRDCA, Submission 9, p. 12.

[101]DITRDCA, Submission 9, p. 12.

[102]DITRDCA, Submission 9, p. 5.

[103]See, for example, ABC, Submission 4, p. 3; CRA, Submission 43, p. 1; DITRDCA, Prominence for connected TV devices, www.infrastructure.gov.au/media-communications-arts/television/prominence-connected-tv-devices (accessed 5 October 2023).

[104]ABC, Submission 4, p. 3.

[105]ABC, Submission 4, p. 4.

[106]eSafety, Submission 2, p. 4.

[107]eSafety, Submission 2, p. 4.

[108]See, for example, eSafety, Submission 2, p. 4; ABC, Submission 4, pp. 2–3; SBS, Submission 3, pp. 8–9; Mr Joshua Zubak, Submission 27, p. 5.

[109]eSafety, Submission 2, p. 2.

[110]AMLA, Submission 55, [p. 3].

[111]AMLA, Submission 55, [p. 3].

[112]HRLC, Submission 50, p. 10.

[113]CHOICE, Submission 54, p. 3.

[114]eSafety, Submission 2, p. 4.

[115]See, for example, eSafety, Submission 2, p. 4; Developers Alliance, Submission 35, p. 3.

[116]eSafety, Submission 2, p. 4.

[121]eSafety, Submission 2, p. 2.

[122]DITRDCA, Submission 9, p. 5.

[123]eSafety, Submission 2, p. 5.

[124]DITRDCA, Submission 9, p. 5.

[125]DITRDCA, Submission 9, p. 5.

[126]DIGI, Submission 65, p. 2.

[127]Foundation for Alcohol Research and Education, Submission 33, p. 5.

[128]eSafety, Submission 2, p. 9.

[129]eSafety, Submission 2, p. 9.

[130]eSafety, Submission 2, p. 9.

[131]European Commission, European Centre for Algorithmic Transparency, https://algorithmic-transparency.ec.europa.eu/about_en (accessed 1 November 2023).

[132]DIGI, Submission 65, p. 2.

[133]HRLC, Submission 50, p. 10.

[134]CHOICE, Submission 54, p. 3.

[135]HRLC, Submission 50, p. 9.

[136]OAIC, Submission 61, p. 6.

[137]Centre for AI and Digital Ethics, Submission 23, [p. 1].

[138]Centre for AI and Digital Ethics, Submission 23, [p. 3].

[139]CHOICE, Submission 54, p. 3.

[140]eSafety, Submission 2, p. 5.

[141]ABC, Submission 4, p. 3.

[142]Tech Council of Australia, Submission 63, p. 10.

[143]eSafety, Submission 2, p. 6.

[144]eSafety, Submission 2, p. 6.

[145]See, for example, ALIA and NSLA, Submission 57, p. 8; HRLC, Submission 50, p. 11; Centre for AI and Digital Ethics, Submission 23, [p. 15]; AMAN, Submission 44, p. 18; Gesellschaft Für Freiheitsrechte, Submission 25, pp. 3–4.

[146]HRLC, Submission 50, p. 11.

[147]HRLC, Submission 50, p. 11.

[148]Centre for AI and Digital Ethics, Submission 23, [p. 15].

[149]Centre for AI and Digital Ethics, Submission 23, [p. 3].