The Australian Government has introduced its first iteration of the ‘Voluntary AI Safety Standard’ and released a proposals paper on mandatory guardrails for AI in high-risk settings.

In brief

On 5 September 2024, the Australian Government (Department of Industry, Science and Resources) introduced the Voluntary AI Safety Standard (Voluntary Standard). The Voluntary Standard provides practical guidance for organisations involved in the AI supply chain through ten voluntary guardrails, which focus on testing, transparency, and accountability requirements.

In addition to this Voluntary Standard, ten mandatory guardrails for AI systems in “high-risk” settings have been proposed in the Australian Government’s ‘Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings’ (Proposals Paper).

The Proposals Paper also outlines three potential regulatory options to mandate the proposed guardrails in high-risk AI settings:

  1. A domain-specific approach: adapting existing regulatory frameworks to include the guardrails through targeted review of existing legislation.
  2. A framework approach: adapting existing regulatory frameworks through framework legislation.
  3. A whole-of-economy approach: introducing a new AI-specific Act to implement the proposed mandatory guardrails for AI in high-risk settings.

The Australian Government is seeking submissions on the Proposals Paper as part of its consultation process. The consultation closes on Friday, 4 October 2024.

In more detail

The new safety measures are based on the Australian Government’s interim response to the ‘Safe and Responsible AI in Australia’ discussion paper, which was released earlier this year and discussed in our client alert. That interim response committed to developing the Voluntary Standard and considering the introduction of mandatory safeguards for AI in high-risk settings.

The Guardrails under the Voluntary Standard

At a glance, the ten voluntary guardrails in the Voluntary Standard are:

  1. Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
  2. Establish and implement a risk management process to identify and mitigate risks;
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance;
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed;
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight;
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
  7. Establish processes for people impacted by AI systems to challenge use or outcomes;
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
  9. Keep and maintain records to allow third parties to assess compliance with guardrails; and
  10. Engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

Proposed Mandatory Guardrails

The first nine proposed mandatory guardrails set out in the Proposals Paper are identical in form to the first nine voluntary guardrails under the Voluntary Standard. The tenth proposed mandatory guardrail differs as follows:

  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails.

Both the voluntary guardrails and proposed mandatory guardrails are intended to align with national and international standards, including ISO/IEC 42001:2023 (Artificial Intelligence Management System) and the developments in AI regulation in jurisdictions such as the EU, Canada, and the UK. 

Application

Despite this alignment in form, the Voluntary Standard has a wider application than the proposed mandatory guardrails. The Voluntary Standard applies to all organisations across the AI supply chain, including:

  • AI developers: an organisation or entity that designs, develops, tests and provides AI technologies, such as AI models and components; and
  • AI deployers: an individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can be internal to an organisation, or external and impacting others, such as customers or other people who are not deployers of the system. As most AI deployers rely on AI systems developed or provided by third parties, the Voluntary Standard also provides procurement guidance.

The proposed mandatory guardrails will apply to AI developers and AI deployers, but only in “high-risk” AI settings. Under the Proposals Paper, a use of AI may be “high-risk”:

  • based on its intended and foreseeable uses, for example where there is a risk of adverse impacts on an individual’s human rights, health and safety, or legal rights. High-risk use cases identified in other countries include AI used in biometrics, employment, law enforcement and critical infrastructure; or
  • in the case of “general-purpose AI” (GPAI), where the mandatory guardrails will apply to all GPAI models. GPAI models are defined in the Proposals Paper as “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems”.

Next steps

Businesses that either develop or deploy AI should consider adopting the voluntary guardrails to align with best practice. The Australian Government suggests starting with guardrail one to create core foundations for a business’s use of AI.

Businesses should also consider making a submission in response to the Proposals Paper for the mandatory guardrails, which is open until Friday, 4 October 2024.

The Voluntary Standard and Proposals Paper come after a flurry of AI regulatory developments in 2023, as discussed in our ‘Year in Review’ alert. The Australian Government has flagged that the Voluntary Standard is a first iteration, which it will update over the next six months.

Author

Adrian Lawrence is the head of the Firm's Asia Pacific Technology, Media & Telecommunications Group. He is a partner in the Sydney office of Baker McKenzie, where he advises on media, intellectual property and information technology, including major issues relating to clients' online and offline media interests. He is recognised as a leading Australian media and telecommunications lawyer.

Author

Toby Patten is a partner in Baker McKenzie's Technology and Healthcare teams in Melbourne. He joined the Firm in March 2005.

Author

Anne has been with Baker McKenzie since 2001. Prior to that, she spent four years with the Australian Attorney-General's Department/Australian Government Solicitor mostly working on large IT projects.
In her time at Baker McKenzie, Anne has spent 18 months working in London (2007-2008) and, more recently, three years working in Singapore (2017-2020).

Author

Caitlin Whale is a partner in the Technology, Communications and Commercial team. She advises on technology, outsourcing and commercial law issues. Caitlin advises on technology and rights-specific issues in large corporate and commercial transactions, and has experience in managing multi-territory licensing and divestments for multi-national clients. She has extensive experience in advising on a range of commercial arrangements, including licence and software agreements, research and development and collaboration agreements, supply agreements and distribution agreements. Caitlin has experience in rights management and enforcement, advising on the ownership, registration, exploitation and protection of copyright, trade marks and designs. She has represented rights-owners and users and has particular experience in relation to online infringement issues.

Author

Jarrod Bayliss-McCulloch is a special counsel in the Information Technology & Commercial department at the Melbourne office of Baker McKenzie and advises on major technology-driven transactions and regulatory issues spanning telecommunications, intellectual property, data privacy and consumer law with a particular focus on digital media and new product development. Jarrod joined the Firm in 2009 and his prior experience includes working in strategy consulting and development economics.