In brief
The United Kingdom hosted the first-ever global Artificial Intelligence (AI) Safety Summit on 1 and 2 November 2023, an event that brought international attention to the regulation of AI. UK Prime Minister Rishi Sunak underscored the urgency of global collaboration in the governance of AI, a technology that defies national boundaries and demands a collective regulatory approach. The summit’s outcome was the Bletchley Declaration, a commitment signed by leading nations, including Germany, the United States, and China, to enhance cooperation in the development and regulatory oversight of AI technologies.
Contents
- The present regulatory terrain: the emergence of soft law
- AI Safety Summit outcome: a collective aspiration for AI governance
- A path to global regulation?
- The road ahead: balancing ethical standards with innovation
We take this as an opportunity to look at the current regulatory landscape and ask: is a global framework the right way to regulate the risks AI poses to human rights? We will focus on AI ethics. While definitions vary, AI ethics is the term given to a broad set of considerations for responsible AI, combining safety, security, human concerns, and environmental concerns.
The present regulatory terrain: the emergence of soft law
The so-called “soft law” on AI ethics has been around for a while. The OECD Principles on AI, adopted by member countries in 2019, call for AI that advances inclusive growth, environmental sustainability, and overall benefits for society. These principles demand that AI systems adhere to legal statutes, human rights, and democratic values and include mechanisms for human intervention. They also advocate for a high level of transparency and responsible disclosure to empower individuals to understand and challenge AI-driven outcomes.
Other examples of AI soft law are:
- The Ethics Guidelines for Trustworthy AI, developed by the European Commission’s High-Level Expert Group on AI (AI HLEG), promoting AI systems that are lawful, ethical, and technically robust while taking their social environment into account.
- The AI4People Summit Declaration, highlighting principles such as the right to fairness and non-discrimination in AI systems.
- The Montreal Declaration for Responsible AI, outlining principles for the responsible development and deployment of AI, including fairness, accountability, and transparency.
- Several international guidance papers, for instance, the UNESCO Recommendation on the Ethics of Artificial Intelligence or the new G7 International Guiding Principles on AI and the AI Code of Conduct.
None of these guidelines, however, constitutes legally binding rules for states or companies using or developing AI technologies.
In the works, but not yet in force, is the EU’s proposed AI Act. The AI Act, which is scheduled to be adopted by the end of this year, would be one of the first comprehensive regulations aimed specifically at AI and, once applicable, would be “hard law,” i.e., binding for in-scope providers and users of AI. In its current version, the regulation follows a risk-based approach, defining four levels of risk an AI system can pose: unacceptable, high, limited, and minimal. AI systems posing unacceptable risks would be prohibited; high-risk systems would face comprehensive requirements; limited- and minimal-risk AI systems would be subject to lighter obligations.
AI Safety Summit outcome: a collective aspiration for AI governance
The Bletchley Declaration, named after the summit’s historic venue, Bletchley Park, has garnered the support of all 29 participating countries. It acknowledges the vast potential of AI to offer global opportunities while also recognizing the significant risks it poses, including the possibility of “catastrophic” outcomes on a global scale. The declaration emphasizes the necessity for AI to be designed and used in a manner that is safe, human-centric, trustworthy, and responsible to ensure equitable benefits.
It further expresses concerns about the potential risks associated with highly capable AI systems (so-called frontier AI), particularly in domains like cybersecurity and biotechnology. It advocates international cooperation to address these challenges and to promote human-centric AI, particularly in the areas of ethics, transparency, fairness, and safety. The declaration calls for collaboration among nations, international organizations, companies, civil society, and academia to ensure AI safety. It emphasizes the responsibility of actors developing powerful and potentially harmful AI systems to prioritize safety, transparency, and accountability. It outlines an agenda focused on identifying shared AI safety risks, building risk-based policies, and supporting an inclusive global dialogue and scientific research network on frontier AI safety. An international panel of experts is to support this work, enabling policymakers to make science-based decisions and to keep pace with technological developments.
A path to global regulation?
Let’s take a step back. It is certainly true that AI knows no borders and cannot be successfully controlled by any one country alone. AI poses fundamental threats to the protection of human rights, such as discrimination and misinformation. So, do we need more than soft law? A binding global framework could be the way to go. Take aviation as an example. The Convention on International Civil Aviation created the basis for international aviation law and founded the International Civil Aviation Organization (ICAO), which is responsible for the worldwide harmonization of aviation rules.
This precedent raises the question: could a similar framework be constructed for the governance of AI? Might the Bletchley Declaration even be a first step toward a global regulatory framework? Aviation, however, is a much more self-contained industry than AI. AI is a cross-cutting technology used in various sectors and applications, from healthcare and education to the automotive industry and the finance sector. The legal and ethical issues surrounding AI are correspondingly diverse and intricate. Given this breadth, it is unlikely that a single international organization or agreement can comprehensively cover all aspects of AI regulation.
The road ahead: balancing ethical standards with innovation
Instead, nations are poised to adopt diverse regulatory strategies, leading to a complex tapestry of laws, regulations, and standards. The challenge is to formulate regulations that reconcile ethical and legal imperatives with the need to foster innovation and economic progress. International collaboration to establish common principles and norms is essential for the effective governance of AI.
The European Union’s AI Act, for example, illustrates the critical balancing act that hard law faces: regulation that comes too early or is too detailed can hinder innovation. Smaller companies, in particular, may not be able to keep up with the regulatory requirements. At the same time, AI regulation can quickly become outdated, as the swift advancement of AI technology and its commercial applications may outpace current legislative efforts. The EU’s AI Act, which has undergone extensive amendments since its first draft in 2021 to address new developments like ChatGPT, exemplifies the need for regulatory agility and foresight.