Chapter 9 - Emerging challenges

AI and the Metaverse

Overview

9.1 This chapter examines potential risks and proposed regulations for emerging technologies, with a focus on artificial intelligence (AI) and the metaverse.

Artificial intelligence

9.2 Artificial intelligence is a broad term with no single agreed definition. AI algorithms guide how AI learns, adapts and makes decisions. The Department of Industry, Science and Resources (DISR) explained AI as follows:

Artificial intelligence (AI) refers to an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming. AI systems are designed to operate with varying levels of automation.

Machine learning are the patterns derived from training data using machine learning algorithms, which can be applied to new data for prediction or decision-making purposes.

Generative AI models generate novel content such as text, images, audio and code in response to prompts.[1]
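
To make these definitions concrete, the following minimal Python sketch shows the machine learning workflow described above: an algorithm derives patterns from training data, and those patterns are then applied to new data for prediction. It assumes the scikit-learn library is installed, and the data and task are invented purely for illustration.

```python
# A minimal sketch of the machine learning workflow described above.
# Invented data: (hours studied, hours slept) -> passed exam (1) or not (0).
from sklearn.linear_model import LogisticRegression

X_train = [[2, 9], [1, 5], [5, 1], [8, 8], [3, 7], [9, 2]]
y_train = [1, 0, 0, 1, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)   # derive patterns from the training data

X_new = [[6, 7], [1, 2]]      # new data the model has never seen
print(model.predict(X_new))   # apply those patterns to predict outcomes
```

Generative AI differs in that the model's output is itself new content (text, images, audio or code) rather than a label or forecast.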

AI potential

9.3 Submissions highlighted the opportunities for technological growth offered by AI. For example, Microsoft advised it:

… strongly believes in the potential for AI, particularly the recent advances in generative AI, to become a powerful tool for advancing critical thinking, stimulating creative expression, and discovering insights amid complex data and processes.[2]

9.4 The Tech Council of Australia also emphasised that Australia should seek to ‘be a leader in best practice regulation of AI’. It noted that Australia should:

… capitalise on the opportunities and new innovations presented by the use of these technologies, while mitigating the risks and potential harms. This includes being an active contributor to the international standards process. Any interventions should be complemented by measures to build awareness and capability among the public and private sectors to develop, use and govern algorithmic systems responsibly.[3]

Risks

9.5 While there are many potential benefits to the use of AI, stakeholders raised various concerns about the risks associated with this technology.

9.6 Evidence to the committee highlighted that the risks of AI are consistent with the risks of algorithm use more generally. Concerns raised included social harm risks such as the unauthorised use of data for profiling, the potential for an increase in inaccurate and untruthful content, the targeting and manipulation of users, the inadvertent exclusion of local content and cultural material, and the perpetuation of bias and discrimination.

9.7 The committee notes that digital platforms have implemented their own policies and systems striving for responsible AI.[4]

Growth of generative AI

9.8 The Gradient Institute explained that generative AI ‘refers to a type of AI that is capable of creating new data or content, such as images, music, text, or videos’.[5]

9.9 The Office of the eSafety Commissioner (eSafety) outlined the following examples of generative AI applications in its Generative AI position statement:

text-based chatbots, or programs designed to simulate conversations with humans, such as Anthropic’s Claude, Bing Chat, ChatGPT, Google Bard, and Snapchat’s My AI

image or video generators, such as the Bing Image Creator, DALL-E 2, Midjourney, and Stable Diffusion

voice generators, such as Microsoft VALL-E.[6]

9.10 eSafety highlighted how generative AI has rapidly improved thanks to recent advancements, including ‘the availability of more training data, enhanced artificial neural networks with larger datasets and parameters, and greater computing power’.[7]

9.11 For example, the Gradient Institute advised that the new generation of chatbots based on generative AI is increasingly capable. These systems are:

… able [to] sustain coherent, human-like conversations with users that include answering queries, writing poetry, solving riddles, summarising ideas and reasoning about the emotional states of others.[8]

9.12 In a different use of generative AI, Amazon outlined how it is using a range of AI globally, including to create customer reviews of products in its stores. Noting that this system may not yet be operating in Australia, Mr Michael Cooley, Director, Public Policy Australia, Amazon Australia, advised:

The system is actually creating those reviews based on other reviews so as to highlight different elements that customers are looking for when they're searching for products, whether that be low price or convenience or high utility.[9]

9.13 The committee noted a 2022 report by the European Union (EU) agency Europol, which estimated that by 2026 about 90 per cent of online content will be either generated or manipulated by generative AI.[10]

Generative AI risks

9.14 The committee heard that AI also brings with it a range of uncertainties.

9.15 The Australian Publishers Association, for example, noted that ‘[t]he potential impact of AI on book publishing, authorship, and copyright is only now just being thought through’.[11]

9.16 Submissions noted that the growth of generative AI has the potential to amplify the risks seen in automated decision making (ADM) and traditional algorithms. eSafety commented:

Companies are moving quickly to develop and deploy their own generative AI technologies. This may lead to not enough attention being paid to risks, guardrails, or transparency for regulators, researchers, and the public.[12]

Consolidation of market dominance

9.17 The committee was advised that generative AI has the potential to solidify the market dominance of digital platforms. The Gradient Institute explained:

Generative AI is so resource intensive that it is almost exclusively the domain of Big Tech companies. Not only do big tech companies have the required expertise, computing power, and data to build these models, they have the platforms from which to deploy them.

We believe that just as today, most interactions with content are moderated by Big Tech through social media, in the future most interactions with artificial personas will also be moderated through Big Tech. It is also clear that giving generative AI models more data and more computing power makes them much more capable, entrenching the existing advantages of Big Tech companies.[13]

9.18 Digital Rights Watch (DRW) similarly painted a picture of a market dominated by Big Tech:

Many economists have warned that AI will drive wages down, increase inequality and consolidate power in the hands of ever fewer corporations. But this is only true of AI developed, owned and led by private companies in the pursuit of profit.

Machine learning and AI could prove to have many social and economic benefits, but only if the technology is governed democratically for the benefit of all. Decisions about how such finite computing resources are allocated is not something that can be left in the hands of a few private companies.[14]

Profiling

9.19 As discussed in Chapter 5: Data, the significant quantities of data captured by digital platforms can be consolidated to form detailed profiles of individual users, including personal details, browsing habits and even geographical location.

9.20 Digital platforms utilise this data to train their AI learning systems.[15] For example, one submission outlined:

Facebook's AI learning system, ‘FBLearner Flow’, ingests trillions of data points every day, from which its algorithmic models can make more than 6 million predictions per second.[16]

9.21 The committee was advised that, as with the personalisation of recommender systems and traditional algorithms, the use of data to personalise online experiences and prioritise content carries the same flow-on risks and social harm concerns. For example, the Gradient Institute noted:

… the amount of personal data that Big Tech companies have about individuals may enable them to personalise generative AI chatbots more effectively, further increasing their manipulation capabilities.[17]

Content risks

9.22 The committee was also warned about the risk of generative AI resulting in an increase in inaccurate and untruthful content.

9.23 Submissions advised that generative AI will likely increase the amount of inaccurate material online. The Australian Library and Information Association and National and State Libraries Australasia warned:

Generative AI by its nature is not directed towards finding “truth” or “accuracy”. And there is a significant risk that as the internet is populated by AI-generated content that this will become a self-referencing spiral as early mistakes are fed back into training data and reinforced.[18]

9.24They further noted:

Generative AI will introduce significant efficiencies and advances. It also is almost inevitably going to increase the amount of “bullshit” found online. As multiple commentators have noted over the years, the primary differential between bullshit and a lie is that someone who lies knows the truth and choses to conceal it, whereas bullshit is characterised by a disinterest in whether something is true or not.[19]

9.25 The use of generative AI in search engines has raised concerns about factual inaccuracy, a lack of source referencing and the potential for misinformation to be spread. DRW highlighted that digital platforms continue to release generative AI to the market without addressing these concerns:

In early 2023, controversy arose regarding the use of OpenAI’s large language model chat bot, ChatGPT, to write articles. Microsoft’s expanded partnership with OpenAI has seen ChatGPT technology already rolled out in their search engine Bing, and Google has since announced that it will be rolling out a chatbot named Bard to provide responses to some Google search queries.[20]

Identification of AI-created material

9.26 Stakeholders expressed concern about the identification of material created by generative AI and sought information on how big tech was approaching this challenge, including the options for disclosing and watermarking AI-generated content.

9.27 Ms Kate Reader, General Manager, Digital Platforms Branch, Australian Competition and Consumer Commission, noted:

… transparency is one of the important things we consider will be valuable for consumers, and a watermark would certainly help, particularly in the scam space but also more generally in relation to understanding how content is produced.[21]

9.28 eSafety similarly expressed support for watermarking of AI-generated material, advising:

… the benefits of watermarking would probably go beyond online safety. I suspect they would be very useful in terms of protecting consumers from misleading material and dealing with any concerns with news quality et cetera. But that was something that I think was raised in eSafety's ‘Tech trends position paper’ on generative AI, which was released last week.[22]

9.29 Meta provided a commitment that its generative AI products within Australia will have watermarking features consistent with commitments made to the US Biden administration.[23] However, Meta was unable to provide an indicative timeframe for implementation, stating:

Some of our additional tooling will most likely only be available in the US initially but our intention is to ensure that things are applied consistently across the platform.[24]
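
While the committee did not examine specific watermarking techniques, a toy example may help to illustrate the general idea. The Python sketch below (using the Pillow imaging library) hides a short provenance tag in the least significant bits of an image's pixels, so the label travels invisibly with the content and can later be read back. The tag and scheme are invented for illustration and do not represent any platform's actual method; production watermarks are far more robust, designed to survive compression, cropping and re-encoding.

```python
# Illustrative only: a naive least-significant-bit (LSB) watermark.
# Real AI-content watermarks use cryptographic or frequency-domain schemes.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical provenance tag

def embed(img: Image.Image, tag: str = TAG) -> Image.Image:
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    out = img.convert("RGB")          # work on a fresh RGB copy
    px = out.load()
    width = out.size[0]
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite lowest red bit
    return out

def extract(img: Image.Image, length: int = len(TAG)) -> str:
    px = img.convert("RGB").load()
    width = img.size[0]
    bits = "".join(str(px[i % width, i // width][0] & 1) for i in range(length * 8))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

marked = embed(Image.new("RGB", (64, 64), "white"))
print(extract(marked))  # -> AI-GENERATED
```

The fragility of such naive schemes is one reason stakeholders sought detail on how platforms intend to implement watermarking in practice.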

Manipulation through synthetic relationships

9.30 The advanced capabilities of generative AI raised further concerns about the ability to manipulate consumers on a more emotional level through artificial relationships.[25]

9.31 The Gradient Institute advised that, with the ability ‘to sustain coherent, humanlike conversations with users that include answering queries, writing poetry, solving riddles, summarising ideas and reasoning about the emotional states of others’, generative AI can be used to create synthetic personas. Users may form emotional bonds with synthetic personas, leaving themselves vulnerable to manipulation by the entity in control of the chatbot.[26] The Gradient Institute explained:

The chatbot might, for example, be designed to convince the user to use certain products, or to subscribe to a particular political belief. These preferences and beliefs could be integrated into the chatbot’s persona, making it difficult for users to even recognise the intentions of the chatbot’s owners.[27]

Current regulatory framework

9.32 A number of existing regulatory measures specifically capture AI design and development activity.

9.33 eSafety continues to promote the Safety by Design initiative, which ‘puts user safety and rights at the forefront of design and development of online products and services’ and which it considers should apply to AI development. eSafety advised:

The online industry can take a lead role by adopting a Safety by Design approach. Safety by Design is built on three principles: service provider responsibility, user empowerment and autonomy, and transparency and accountability. Technology companies can uphold these principles by making sure they incorporate safety measures at every stage of the product lifecycle.[28]

9.34 In addition to a prevention approach, including education programs and resources, eSafety provides regulatory protections. The Online Safety Act 2021 (OSA) ‘provides eSafety with a range of powers and functions to address online safety issues, including those related to generative AI’.[29] eSafety noted:

eSafety’s four complaints-based investigations schemes do capture AI-generated images, text, audio, and other content which meets the legislative definitions of:

class 1 material (such as CSEA material and terrorist and violent extremism content) and class 2 material (such as pornography)

intimate images produced or shared without consent (sometimes referred to as ‘revenge porn’)

cyberbullying material targeted at a child

cyber abuse material targeted at an adult.[30]

9.35 Basic Online Safety Expectations reporting requirements include questions about the use of AI tools to detect illegal and harmful content.[31] This could be expanded in future to require service providers to report on the reasonable steps they are taking to ensure the safety of their generative AI functionalities.[32]

9.36 One proposed industry code developed under the OSA for internet search engine services was redrafted, prior to being registered, ‘to capture proposed changes to search engines to incorporate generative AI features’, with the aim of addressing risks associated with the use of generative AI to generate class 1 material.[33]

9.37 Additionally, the committee was advised that one of the Digital Platform Regulators Forum’s key strategic priorities is a focus on understanding and assessing the benefits, risks and harms of generative AI.[34]

9.38 Further, DISR has published Australia’s AI Ethics Principles to guide businesses and governments in responsible AI design, development and implementation, ensuring AI is ‘safe, secure and reliable’.[35] The principles are a voluntary framework, intended to complement existing AI regulations and practices. The principles are:

Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.

Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.

Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.

Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.

Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.

Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.[36]

Recent consultations and studies

9.39 A range of recent studies and consultations has been undertaken in the field of AI, furthering discussion and knowledge in this emerging area.

9.40 The committee noted that DISR conducted a consultation on AI and ADM regulation in 2022.[37]

9.41 DISR also recently conducted a consultation process on safe and responsible AI, considering ‘what the Australian Government can do to support the safe and responsible use of AI’. The consultation, which closed on 4 August 2023, is assessing:

voluntary approaches, like tools, frameworks and principles

enforceable regulatory approaches, like laws and mandatory standards.[38]

9.42 In 2019, the Australian Government published an Artificial Intelligence Roadmap (the Roadmap). The Roadmap was co-developed by CSIRO’s Data61 and the then Department of Industry, Innovation and Science and ‘identifies strategies to help develop a national AI capability to boost the productivity of Australian industry, create jobs and economic growth, and improve the quality of life for current and future generations’.[39]

Guiding the future of AI development

Cooperation and consultation

9.43 Stakeholders highlighted the essential need for international collaboration in the regulation of generative AI, in addition to collaboration among existing local regulators. Indeed, Australia is involved in international forums and discussions:

eSafety is actively involved in bilateral and multilateral discussions on emerging technologies, including through the Global Online Safety Regulators Network, to promote Australia and eSafety’s perspectives on online safety regulatory issues.[40]

9.44 Various submissions discussed the need for engagement with government, both domestically and internationally.

9.45 Amazon noted that it was working with the Australian and other governments to grapple with issues relating to AI, such as ethical guidelines and policy controls.[41]

9.46 Meta also advised that it has been engaging with DISR on the safe and responsible AI in Australia consultation process.[42]

9.47 The Gradient Institute advised:

The Australian Government should seek to establish an expert advisory committee with diverse and relevant expertise on AI risks and AI safety, drawing broadly from industry, government, academia, the nonprofit sector and the broader civil society, to monitor the rapidly evolving AI risks and provide ongoing advice to the Government on how Australia should best respond to those risks.[43]

9.48 Microsoft also highlighted the need for skilled technological leaders. It advised:

Australia, its allies, and other democratic societies will need multiple and strong technology leaders to help advance AI, with broader public policy leadership and cooperation on topics including data, AI supercomputing infrastructure and talent.[44]

Considerations

9.49 A number of principles and values need to be considered when designing AI governance frameworks.

9.50 The Consumer Policy Research Centre (CPRC) advised of six key principles ‘critical to include in AI and ADM architecture to ensure improved consumer outcomes’: accessibility, accountability, agency, transparency, understandability and explainability, and sustainability. It stated:

We recommend that the Government also prioritise the development of innovation enablers to support technology that will create genuine benefits for all Australians. Innovation enablers should include:

investing in and enabling AI and ADM innovation in the not-for-profit sector to demonstrably improve community outcomes and welfare, and

implementing regulatory sandboxes to enable the safe testing and learning environment prior to deploying AI and ADM-enabled products and services at scale.[45]

9.51 However, DRW noted the risks of regulatory sandboxes:

Regulatory sandboxes can be a dangerous experiment, even those for good reasons. A better approach might be pre-emptive regulation or co-governance frameworks, like those suggested by Fairwork in their model standards for the fair implementation of artificial intelligence.[46]

9.52 UNICEF advised the committee that AI development must ensure the wellbeing of children in the AI world. It recommended that the development of AI systems be guided ‘to ensure they are child-centred, protecting children, providing equitably for their rights, and empowering them to participate in an AI world’.[47]

9.53 In May 2019, the OECD published guiding principles on AI, ‘which includes human-centred values and fairness, transparency and explainability, robustness, security and safety, inclusive growth and sustainable development, and accountability’.[48]

9.54 The committee also noted support for risk-based regulatory models.[49] For example, Amazon Web Services stated:

We support AI governance efforts that take a risk-based approach to addressing the responsible use of AI, such as Australia’s AI Ethics Principles, the OECD’s AI Principles, and Singapore’s AI Governance Framework.[50]

The case for regulation

9.55 The committee was advised that a regulatory approach to generative AI was essential.

9.56 The Gradient Institute advised that ‘measures to mitigate the risks of generative AI manipulation are required urgently’.[51] Noting that Big Tech companies are ‘competing to develop, deploy and monetise generative AI in general, and chatbots in particular’, the Gradient Institute highlighted:

… the scale of the risk, and the previous failure of market forces to control AI-driven manipulation by Big Tech in social media, imply the need for strong regulation of generative AI manipulation.[52]

International approaches

9.57 eSafety outlined in its AI position statement the myriad international approaches to generative AI, from voluntary principles and standards to legislation and mandatory requirements:

voluntary principles and governance frameworks (India)

AI governance frameworks, third-party testing and verification technology (Singapore)

application of existing consumer safety and data regulations and the signing of pledges around self-regulatory principles (US)

audits, risk and impact assessments and pre-launch disclosure requirements for ‘high-risk AI’ (Canada, UK and South Korea)

new and enforceable rules, including supervision powers (China)

dedicated AI legislation (EU, Canada, South Korea, Brazil)

interim bans on generative AI technology (Italy).[53]

9.58 The committee was advised that the ‘newly published AI Risk Management Framework from the U.S. [National] Institute of Standards and Technology offers a helpful roadmap for AI governance’.[54] Microsoft stated:

We encourage federal and local governments to leverage its contents to help organisations identify and address the potential risks of AI systems, encouraging interoperability with publicly developed best practices.[55]

Dedicated legislation

9.59 eSafety highlighted that the EU’s Digital Services Act ‘provides harmonised rules on AI, outlining its risk-based approach to the regulation of AI’.[56]

9.60 Further, eSafety advised the committee that the EU Artificial Intelligence Act is the first law on AI proposed by a major regulator.[57] The proposed law:

… assigns applications of AI to three risk categories: first, applications and systems that create an unacceptable risk, such as government-run social scoring, are banned; second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements; lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.[58]
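
The tiered structure this quote describes can be illustrated with a short sketch. The Python example below maps example AI applications to the three risk categories; the first two systems named are taken from the quote, the third is invented, and the mapping is not drawn from the Act's annexes.

```python
# Illustrative sketch of a three-tier, risk-based classification scheme
# like the one described in the proposed EU Artificial Intelligence Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to specific legal requirements"
    MINIMAL = "largely left unregulated"

# Example assignments (first two from the quote; the spam filter is invented).
EXAMPLE_SYSTEMS = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "CV-scanning tool that ranks job applicants": RiskTier.HIGH,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```

Under such a scheme, the regulatory burden scales with assessed risk rather than applying uniformly to all AI systems.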

The metaverse

9.61 The metaverse is a concept that is still evolving, and companies mean different things when referring to it. Generally, it denotes an immersive online world where people can gather to socialise, work or play.[59]

9.62 The Alannah & Madeline Foundation explained:

There is no single, accepted definition of the 'metaverse' - indeed, many stakeholders don't use the term, preferring other framings such as 'immersive technologies' or 'extended reality'. As a generalisation, these technologies are understood to have the following features:

Realistic - 3D virtual environments which participants perceive as lifelike

Immersive - the participant feels partly or fully immersed in this space

Interactive - participants interact with their surroundings and other participants, engage in transactions, and create content

Interoperable or integrated - participants travel (fairly) seamlessly between virtual spaces, taking their virtual assets with them

24/7 - digital spaces exist in real time and are 'always on'

Virtual economy - a digital economy powers the metaverse, with blockchain and cryptocurrencies enabling trade and purchase of digital items.[60]

9.63 eSafety commented on the potential benefits of new technologies such as the metaverse:

Immersive technologies and emerging online environments, such as the metaverse, provide a range of opportunities – in entertainment, education, defence, health sciences and other fields. Being able to practise a skill virtually or to understand an experience from an unfamiliar point of view are valuable applications. Immersive experiences can also improve the quality of life and independence of people who are unable to access actual experiences for a variety of reasons, including disability, age, caring responsibilities, transport access or remoteness, and can help people build empathy by experiencing a virtual world from different perspectives.[61]

Potential harms

9.64 All harms relating to digital platforms examined in the preceding chapters of this report also apply to, and may be amplified in, the metaverse.

9.65 Evidence specifically discussing potential harms arising from, and experienced within, the metaverse is discussed below.

Data and wellbeing concerns

9.66 Submissions raised concern about the large amount of data likely to be collected in the metaverse.[62] Excessive collection and collation of personal data could lead to an increase in scams, identity theft and fraud. The risks of data collection and aggregation are elaborated on in Chapter 5: Data.

9.67 Data collection in the metaverse may include the collection and aggregation of sensitive biometric information, such as eye movements, voice recordings, measurement of movements, heart rate, fingerprints, location and behaviour of users, adding further risks of profiling, targeted marketing, fraud and data breaches.[63]

9.68 The Foundation for Alcohol Research and Education (FARE) commented on the risks to wellbeing from metaverse data collection:

The monitoring of biometric data such as heart rate, eye movement and pupil dilation, which are integrated into the development of immersive visual reality technology for functionality purposes could be used to provide real-time psychological insights for marketing and retail purposes. For alcohol and other addictive products, this could mean that addictive tendencies and stressors could be detected and used to target marketing and encourage the purchase and use of harmful and addictive products like alcohol.[64]

9.69 Targeted marketing of harmful products is an issue for consumers across all digital platforms. Submissions raised concerns that marketing based on data collected in the metaverse may be highly targeted.[65]

9.70 FARE argued that this targeting may be exacerbated in the metaverse:

However, the promotion and sale of alcohol in the Metaverse will be even more engaging tha[n] that available through the digital platforms discussed in the above sections of this submission as we see e-commerce transition to icommerce (immersive commerce). This has implications for both the effect of alcohol marketing and also for the increasing availability of alcohol in the community. The engaging nature of alcohol promotion and sale in the Metaverse is likely to have greater influence on people’s attitudes toward and purchasing and use of alcoholic products. Research with Australian adolescents and young adults indicates that engaging with digital marketing for unhealthy food and beverages is associated with consumption of unhealthy food and beverages, and that engaging with this marketing has a stronger impact than exposure alone (i.e., as was previously the case with traditional media advertising).[66]

9.71 The Alannah & Madeline Foundation was concerned that the metaverse could exacerbate the wellbeing concerns already being observed from increased digital engagement in the community, such as loneliness, disassociation, desensitisation, dysregulated behaviours, body image problems, and social and political polarisation.[67]

Criminal behaviour

9.72 Stakeholders were concerned that the metaverse could be used to engage in harmful or criminal behaviour, such as grooming, sexual abuse, bullying, discrimination, threats, defamation, identity theft, assault, cyber-attacks and scams.[68]

9.73 eSafety commented on the potential for significant harms in the metaverse:

eSafety is concerned that these technologies can be used for cyberbullying, grooming children for online sexual abuse, and image-based abuse. Further, forms of assault might be experienced virtually including through a haptic suit. Augmented realities could also be used to fake a sexually explicit three-dimensional image or video of a real person and interact with it without their consent. While a virtual experience may be considered private due to being physically isolated, there is a risk that an intimate image or video created in that environment could then be livestreamed, stored, or shared without consent.[69]

9.74 The Centre for AI and Digital Ethics submitted that the potential for scams is higher with new technology developments:

The mystique of new technology and ambiguity around regulation exposes Australians to scams and fraud – much as what happened with cryptocurrencies and NFTs [non-fungible tokens]. Virtual investment property offers another easy backstory for scammers to tell their marks … The Australian government should expect a wave of fraud related to virtual investment property schemes because of the similarities with the cryptocurrency market. The Australian Competition and Consumer Commission’s report on Scam Activity stated that cryptocurrency investment scams were the ‘main driver’ of the sharp 35% increase in investment scam losses in 2021 from the previous year, with Australians reporting $99 million lost to these scams. Given the growing interest in virtual reality, it is likely that similar fraudulent activities will occur in this market, posing a significant risk to Australian investors.[70]

Current regulatory framework

9.75 Activities by digital platforms expanding into the metaverse are captured by existing regulations in relation to safety, privacy, competition, and consumer requirements, as explored in previous chapters. There is no metaverse-specific legislation.

Proposed solutions

When to implement regulation

9.76 Many submissions argued that regulation capturing the operation and use of the metaverse should be implemented now, rather than waiting until metaverse technology is more developed.[71]

9.77 The CPRC recognised the need for immediate regulation:

We are beyond the waiting game now when it comes to developing adequate consumer protections for products and services in the digital economy. It is clearly evident that a self-regulatory or self-assessed approach is no longer adequate in addressing the risks posed to consumers by large and powerful digital platforms. We need the Federal Government to be proactive and not wait for Australians to endure harm first before creating safeguards for them.[72]

9.78 Mr Mark Nottingham, expert advisor to the United Kingdom Competition and Markets Authority’s Digital Markets Unit, argued that it is best to implement regulation now, while the metaverse is still being designed, ‘since making incompatible changes to technical architectures at scale is notably difficult’.[73]

9.79 Similarly, the Alannah & Madeline Foundation remarked:

We believe it is important for legislators and regulators to be on the 'front foot' with regard to new digital technologies. Many decision-makers took years to respond to developments like social media and smartphones; arguably this delay contributed to the many concerns that exist nowadays in relation to children’s use of technology.[74]

9.80 The Centre for AI and Digital Ethics recommended that regulators ‘take a proactive approach [to] a) policing violations of existing consumer protection law and b) protecting competition as firms explore virtual reality offerings’. It added:

However, because virtual reality products show genuine potential to become pervasive in civic life, it is critical that regulators ensure that competition concerns are adequately policed and that we learn the last decade’s lesson [about the] limits of platform self-governance.[75]

Broad legislation

9.81 Submissions supported broad risk- or principle-based legislation to capture the metaverse and other new technologies as they emerge.[76]

9.82 Children and Media Australia argued that the metaverse is:

… likely to evolve rapidly, posing challenges for law and regulation in keeping up. This is all the more reason to craft regulations that strike at the heart of the risks posed, rather than addressing particular platforms or practices.[77]

9.83 The Alannah & Madeline Foundation supported 'safety by design' and suggested that regulatory models be crafted in a way that can capture new technologies. It recommended that government:

ensure regulation is high-level enough to remain relevant as new digital technologies emerge, such as immersive technologies or 'extended reality' (aka the 'metaverse')

support investment in research and policy development concerning the likely societal, economic and legislative implications of immersive technologies emerging via pathways like the 'metaverse'. Any policy and legislation developed in response should be informed by expertise on the rights of the child and should treat the best interests of the child as the primary consideration in relation to the handling of children's data by digital platforms.[78]

9.84 The Australian Research Alliance for Children and Youth (ARACY) recommended consideration of its evidence-based wellbeing framework, The Nest, which offers a guide for navigating the dimensions of the metaverse from a child’s perspective and provides a foundation for safety-by-design processes and indicators of success.[79] It commented:

Taking this approach can foster innovative solutions to potential harms or risks without necessarily stifling development and maximising the opportunities of the Metaverse. In addition, ongoing, real-time exploration of the hopes for the Metaverse by users as well as their attitudes to potential harms must be taken into consideration, as this can enable a less reactive approach to regulatory interventions.[80]

9.85 ARACY added:

We consider that modelling a process and associated regulatory framework similar to the development and approval of new drugs and medical technologies, or even vehicle safety testing, would be a beneficial starting point. The required testing and reporting on safety-by-design components for Metaverse developments and the ethics considerations necessary for testing will assist in embedding the question ‘just because we could, should we?’ in research and development. A regulatory framework will subject all companies operating in this space to the same time frames and requirements prior to going to market.[81]

9.86 Similarly, FARE argued in favour of proactive and systematic regulatory measures for the metaverse and other emerging technologies that include the following principles:

A primary consideration of preventing harm from digital platform business activities, and

Minimum standards that require [that] digital platforms do not act in ways that put people using platforms at risk of harm, including to their health and wellbeing.[82]

International collaboration

9.87 Submissions suggested that international cooperation between governments and companies is important to ensure adequate and consistent regulation.[83]

9.88 ARACY supported a global framework:

An international regulatory framework could be developed and regularly updated through an international multidisciplinary body. This could fall under the auspices of the International Telecommunications Union with each member state implementing the regulations through their national bodies, for example, the eSafety Commission in Australia. The international regulatory framework would then provide the basis for developing national regulations and legislations. A multidisciplinary body should also be appointed to monitor the regulations and to determine any changes to them over time. Enforcement could continue, in Australia, under the eSafety Commission or affiliated body.[84]

9.89 Blockchain Australia also indicated that there is a need for legislative agreements across borders and added:

A key component will be intelligence gathering and collaboration with the tech firms involved in developing these experiences. This emphasises the need to build partnerships with Telco companies, which possess the infrastructure through which attacks are perpetrated. Developers must be included in the security process and need precision training on the vulnerabilities that they are likely to face. Governments need to start developing approaches to defend against threats now, such as the controls to remove bad actors and user education.[85]

Footnotes

[1] Department of Industry, Science and Resources (DISR), Safe and Responsible AI: discussion paper, p. 5.

[2] Microsoft, Submission 47, p. 10.

[3] Tech Council of Australia, Submission 63, p. 10.

[4] See, for example, Microsoft, Submission 47, pp. 10–11; Meta, Submission 69, pp. 51–52; Amazon Web Services (AWS), Submission 46, p. 4.

[5] Gradient Institute, Submission 30, p. 2.

[6] Office of the eSafety Commissioner (eSafety), Generative AI – position statement, www.esafety.gov.au/industry/tech-trends-and-challenges/generative-ai (accessed 30 October 2023).

[7] eSafety, Generative AI – position statement (accessed 30 October 2023).

[8] Gradient Institute, Submission 30, p. 2.

[9] Proof Committee Hansard, 22 August 2023, p. 5.

[10] Senator Shoebridge, Proof Committee Hansard, 22 August 2023, p. 20.

[11] Australian Publishers Association, Submission 56, p. 4.

[12] eSafety, Generative AI – position statement (accessed 30 October 2023).

[13] Gradient Institute, Submission 30, p. 3.

[14] Digital Rights Watch (DRW), Submission 68, p. 19.

[15] Mr Joshua Zubak, Submission 27, pp. 1–2.

[16] Mr Joshua Zubak, Submission 27, p. 1.

[17] Gradient Institute, Submission 30, p. 3.

[18] Australian Library and Information Association (ALIA) and National and State Libraries Australasia (NSLA), Submission 57, pp. 5–6.

[19] ALIA and NSLA, Submission 57, pp. 5–6.

[20] DRW, Submission 68, p. 28.

[21] Proof Committee Hansard, 22 August 2023, p. 36.

[22] Ms Morag Bond, Executive Manager, Industry Regulation and Legal Services, eSafety, Proof Committee Hansard, 22 August 2023, p. 36.

[23] Ms Mia Garlick, Regional Director of Policy, Meta, Proof Committee Hansard, 22 August 2023, p. 21.

[24] Ms Mia Garlick, Regional Director of Policy, Meta, Proof Committee Hansard, 22 August 2023, p. 21.

[25] eSafety, Generative AI – position statement (accessed 30 October 2023).

[26] Gradient Institute, Submission 30, p. 2.

[27] Gradient Institute, Submission 30, pp. 2–3.

[28] eSafety, Generative AI – position statement, www.esafety.gov.au/industry/tech-trends-and-challenges/generative-ai (accessed 30 October 2023).

[34] Ms Elizabeth Hampton, Deputy Commissioner, Office of the Australian Information Commissioner, Proof Committee Hansard, 22 August 2023, p. 27.

[36] DISR, Australia’s AI Ethics Principles (accessed 31 October 2023).

[38] DISR, Responsible AI in Australia: have your say, 1 June 2023, www.industry.gov.au/news/responsible-ai-australia-have-your-say (accessed 31 October 2023).

[39] CSIRO, Artificial Intelligence Roadmap, ‘Artificial Intelligence: solving problems, growing the economy and improving our quality of life’, www.csiro.au/en/research/technology-space/ai/artificial-intelligence-roadmap (accessed 31 October 2023).

[41] Mr Michael Cooley, Director, Public Policy Australia, Amazon Australia, Proof Committee Hansard, 22 August 2023, p. 5.

[42] Ms Mia Garlick, Regional Director of Policy, Meta, Proof Committee Hansard, 22 August 2023, p. 21.

[43] Gradient Institute, Submission 30, p. 4.

[44] Microsoft, Submission 47, p. 11.

[45] Consumer Policy Research Centre (CPRC), Submission 60, p. 4.

[46] DRW, Submission 68, p. 32.

[47] UNICEF, Submission 14, p. 11.

[48] eSafety, Submission 2, p. 10.

[49] See, for example, eSafety, Tech Trends Position Statement – Generative AI, August 2023, p. 22; AWS, Submission 46, p. 4.

[50] AWS, Submission 46, p. 4.

[51] Gradient Institute, Submission 30, p. 3.

[52] Gradient Institute, Submission 30, p. 3.

[54] Microsoft, Submission 47, p. 11.

[55] Microsoft, Submission 47, p. 11.

[56] eSafety, Submission 2, p. 10.

[57] eSafety, Submission 2, pp. 9–10.

[58] eSafety, Submission 2, p. 10.

[59] Merriam-Webster, What is the ‘metaverse’?, 30 October 2021, www.merriam-webster.com/wordplay/meaning-of-metaverse (accessed 30 October 2023).

[60] Alannah & Madeline Foundation, Submission 41, p. 11.

[61] eSafety, Submission 2, p. 15.

[62] See, for example, Foundation for Alcohol Research and Education (FARE), Submission 33, p. 14; eSafety, Submission 2, p. 15; Alannah & Madeline Foundation, Submission 41, p. 12.

[63] See, for example, FARE, Submission 33, p. 14; eSafety, Submission 2, p. 15; Alannah & Madeline Foundation, Submission 41, p. 12.

[64] FARE, Submission 33, p. 14.

[65] Cancer Council, Submission 5, [p. 2].

[66] FARE, Submission 33, p. 14.

[67] Alannah & Madeline Foundation, Submission 41, p. 12.

[68] See, for example, eSafety, Submission 2, p. 15; Alannah & Madeline Foundation, Submission 41, p. 12.

[69] eSafety, Submission 2, p. 15.

[70] Centre for AI and Digital Ethics, Submission 23, [p. 7].

[71] See, for example, Children and Media Australia, Submission 53, p. 4; Alannah & Madeline Foundation, Submission 41, p. 10; Mr Mark Nottingham, Submission 37, p. 6.

[72] CPRC, Submission 60, p. 8.

[73] Mr Mark Nottingham, Submission 37, p. 6.

[74] Alannah & Madeline Foundation, Submission 41, pp. 10–11.

[75] Centre for AI and Digital Ethics, Submission 23, [p. 7].

[76] See, for example, CPRC, Submission 60, p. 8; Children and Media Australia, Submission 53, p. 4; Australian Research Alliance for Children and Youth (ARACY), Submission 21, [p. 2]; FARE, Submission 33, p. 5.

[77] Children and Media Australia, Submission 53, p. 4.

[78] Alannah & Madeline Foundation, Submission 41, p. 5.

[79] ARACY, Submission 21, [p. 2].

[80] ARACY, Submission 21, [p. 4].

[81] ARACY, Submission 21, [p. 4].

[82] FARE, Submission 33, p. 5.

[83] See, for example, ARACY, Submission 21, [p. 4]; Blockchain Australia, Submission 45, [p. 13].

[84] ARACY, Submission 21, [p. 4].

[85] Blockchain Australia, Submission 45, [p. 13].