The Australian Government has introduced its first iteration of the ‘Voluntary AI Safety Standard’ and released a proposals paper on mandatory guardrails for AI in high-risk settings.
In brief
On 5 September 2024, the Australian Government (Department of Industry, Science and Resources) introduced the Voluntary AI Safety Standard (Voluntary Standard). The Voluntary Standard provides practical guidance for organisations involved in the AI supply chain through ten voluntary guardrails, which focus on testing, transparency, and accountability.
In addition to this Voluntary Standard, ten mandatory guardrails for AI systems in “high-risk” settings have been proposed in the Australian Government’s ‘Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings’ (Proposals Paper).
The Proposals Paper also outlines three potential regulatory options to mandate the proposed guardrails in high-risk AI settings:
- A domain-specific approach: adapting existing regulatory frameworks to include the guardrails through targeted review of existing legislation.
- A framework approach: adapting existing regulatory frameworks through framework legislation.
- A whole-of-economy approach: introducing a new AI-specific Act to implement the proposed mandatory guardrails for AI in high-risk settings.
The Australian Government is seeking submissions on the Proposals Paper as part of its Consultation. The Consultation closes on Friday, 4 October 2024.
In more detail
The new safety measures are based on the Australian Government’s interim response to the ‘Safe and Responsible AI in Australia’ discussion paper, which was released earlier this year and discussed in our client alert. That interim response committed to developing the Voluntary Standard and considering the introduction of mandatory safeguards for AI in high-risk settings.
The Guardrails under the Voluntary Standard
At a glance, the ten voluntary guardrails in the Voluntary Standard are:
- Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
- Establish and implement a risk management process to identify and mitigate risks;
- Protect AI systems, and implement data governance measures to manage data quality and provenance;
- Test AI models and systems to evaluate model performance and monitor the system once deployed;
- Enable human control or intervention in an AI system to achieve meaningful human oversight;
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
- Establish processes for people impacted by AI systems to challenge use or outcomes;
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
- Keep and maintain records to allow third parties to assess compliance with guardrails; and
- Engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
Proposed Mandatory Guardrails
The first nine proposed mandatory guardrails set out in the Proposals Paper are identical in form to the first nine voluntary guardrails under the Voluntary Standard. The tenth mandatory guardrail differs, requiring organisations to:
- Undertake conformity assessments to demonstrate and certify compliance with the guardrails.
Both the voluntary guardrails and proposed mandatory guardrails are intended to align with national and international standards, including ISO/IEC 42001:2023 (Artificial Intelligence Management System) and the developments in AI regulation in jurisdictions such as the EU, Canada, and the UK.
Application
Despite this alignment in form, the Voluntary Standard has a wider application than the proposed mandatory guardrails: it applies to all organisations across the AI supply chain, including:
- AI developers: an organisation or entity that designs, develops, tests and provides AI technologies, such as AI models and components; and
- AI deployers: an individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can be internal to an organisation, or external and impacting others, such as customers or other people who are not deployers of the system. As most AI deployers rely on AI systems developed or provided by third parties, the Voluntary Standard also provides procurement guidance.
The proposed mandatory guardrails will apply to AI developers and AI deployers, but only in the context of “high-risk” AI settings. The Proposals Paper identifies two ways in which the use of AI may be “high-risk”:
- based on its intended and foreseeable uses, for example where there is a risk of adverse impacts on an individual’s human rights, health and safety, or legal rights. High-risk use cases identified in other countries include AI used in biometrics, employment, law enforcement, and critical infrastructure; or
- where the AI is a “general-purpose AI” (GPAI) model, in which case the mandatory guardrails will apply regardless of the use. GPAI models are defined in the Proposals Paper as “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems”.
Next steps
Businesses that either develop or deploy AI should consider adopting the voluntary guardrails to ensure best practice. The Australian Government suggests starting with guardrail one to create core foundations for your business’s use of AI.
Businesses should also consider making a submission in response to the Proposals Paper for the mandatory guardrails, which is open until Friday, 4 October 2024.
The Voluntary Standard and Proposals Paper follow a flurry of AI regulatory developments in 2023, as discussed in our ‘Year in Review’ alert. The Australian Government has flagged that the Voluntary Standard is a first iteration, which it will update over the next six months.