In brief

The United Kingdom hosted the first-ever global Artificial Intelligence (AI) Safety Summit on 1 and 2 November 2023, an event that brought international attention to the regulation of AI. UK Prime Minister Rishi Sunak underscored the urgency of global collaboration in the governance of AI, a technology that defies national boundaries and demands a collective regulatory approach. The summit's outcome was the Bletchley Declaration, a commitment signed by leading nations, including Germany, the United States, and China, to enhance cooperation in the development and regulatory oversight of AI technologies.


Contents

  1. The present regulatory terrain: the emergence of soft law
  2. AI safety summit outcome: a collective aspiration for AI governance
  3. A path to global regulation?
  4. The road ahead: balancing ethical standards with innovation

We take this as an opportunity to look at the current regulatory landscape and ask: is a global framework the right way to regulate the risks AI poses to human rights? We will focus on AI ethics. While definitions vary, AI ethics is the term given to a broad set of considerations for responsible AI that combine safety, security, human concerns, and environmental concerns.

The present regulatory terrain: the emergence of soft law

The so-called “soft law” on AI ethics has been around for a while. The OECD Principles on AI, adopted by member countries in 2019, call for AI that advances inclusive growth, environmental sustainability, and overall benefits for society. These principles demand that AI systems adhere to legal statutes, human rights, and democratic values and include mechanisms for human intervention. They also advocate for a high level of transparency and responsible disclosure to empower individuals to understand and challenge AI-driven outcomes.

Other examples of AI soft law are:

  • The Ethics Guidelines for Trustworthy AI developed by the European Commission's High-Level Expert Group on AI (AI HLEG), promoting AI systems that are lawful, ethical, and technically robust while taking their social environment into account.
  • The AI4People Summit Declaration highlighting principles such as the right to fairness and non-discrimination in AI systems.
  • The Montreal Declaration for Responsible AI outlining principles for the responsible development and deployment of AI, including fairness, accountability, and transparency.
  • Several international guidance papers, for instance, the UNESCO Recommendation on the Ethics of Artificial Intelligence or the new G7 International Guiding Principles on AI and the AI Code of Conduct.

None of these guidelines, however, constitutes legally binding rules for states or companies using or developing AI technologies.

In the works, but not yet implemented, is the EU's proposed AI Act. The AI Act, which is scheduled to be adopted by the end of this year, would be one of the first comprehensive regulations aimed specifically at AI and, once implemented, would be “hard law,” i.e., binding on in-scope providers and users of AI. The regulation, in its current version, follows a risk-based approach, defining four risk levels an AI system can pose: unacceptable, high, limited, or minimal. AI systems posing unacceptable risks will be prohibited; for high-risk systems, comprehensive regulation is envisaged; limited- and minimal-risk AI systems would be subject to lighter obligations.

AI safety summit outcome: a collective aspiration for AI governance

The Bletchley Declaration, named after the summit's historic venue, Bletchley Park, has garnered the support of all 29 participating countries. It acknowledges the vast potential of AI to offer global opportunities while also recognizing the significant risks it poses, including the possibility of “catastrophic” outcomes on a global scale. The declaration emphasizes the necessity for AI to be designed and utilized in a manner that is safe, human-centric, trustworthy, and responsible to ensure equitable benefits.

It further expresses concerns about the potential risks associated with highly capable AI systems – so-called frontier AI – particularly in domains like cybersecurity and biotechnology. It advocates for international cooperation to address these challenges and promote human-centric AI, particularly in the areas of ethics, transparency, fairness, and safety. The declaration calls for collaboration among nations, international organizations, companies, civil society, and academia to ensure AI safety. It emphasizes the responsibility of actors developing powerful and potentially harmful AI systems to prioritize safety, transparency, and accountability. It outlines an agenda focused on identifying shared AI safety risks, building risk-based policies, and supporting an inclusive global dialogue and a scientific research network on frontier AI safety. An international panel of experts is to support this work, enabling policymakers to make science-based decisions and keep pace with technological developments.

A path to global regulation?

Let’s take a step back. It is certainly true that AI knows no borders and cannot be successfully controlled by any one country alone. AI poses fundamental threats to human rights, such as discriminatory bias and misinformation. So, do we need more than soft law? A binding global framework could be the way to go. Take aviation as an example. The Convention on International Civil Aviation created the basis for international aviation law and founded the International Civil Aviation Organization (ICAO), which is responsible for the worldwide alignment of air regulations.

This precedent raises the question: Could a similar framework be constructed for the governance of AI? Might the Bletchley Declaration even be a first step toward a global regulatory framework? But aviation is a much more self-contained industry than AI. AI is a cross-cutting technology used in various sectors and applications, from healthcare and education to the automotive industry and the finance sector. The legal and ethical issues surrounding AI are extremely diverse and intricate. As AI will affect many very different sectors, it is unlikely that a single international organization or agreement can comprehensively cover all aspects of AI regulation.

The road ahead: balancing ethical standards with innovation

Instead, nations are poised to adopt diverse regulatory strategies, leading to a complex tapestry of laws, regulations, and standards. The challenge is to formulate regulations that reconcile ethical and legal imperatives with the need to foster innovation and economic progress. International collaboration to establish common principles and norms is essential for the effective governance of AI.

The European Union’s AI Act, for example, illustrates the critical balancing act that hard law faces: regulation that comes too early or is too detailed can hinder innovation. Smaller companies, in particular, may not be able to keep up with the regulatory requirements. At the same time, AI regulation can quickly become outdated, as the swift advancement of AI technology and its commercial applications may outpace current legislative efforts. The EU’s AI Act, which has undergone extensive amendments since its first draft in 2021 to address new developments such as ChatGPT, exemplifies the need for regulatory agility and foresight.

Author

Anahita Thoms heads Baker McKenzie's International Trade Practice in Germany and is a member of our EMEA Steering Committee for Compliance & Investigations. Anahita is Global Lead Sustainability Partner for our Industrials, Manufacturing and Transportation Industry Group. She serves as an Advisory Board Member in profit and non-profit organizations, such as Atlantik-Brücke, and is an elected National Committee Member at UNICEF Germany. She has served for three consecutive terms as the ABA Co-chair of the Export Controls and Economic Sanctions Committee and as the ABA Vice-Chair of the International Human Rights Committee. Anahita has also been an Advisory Board Member (Beirätin) of the Sustainable Finance Advisory Council of the German Government.

Anahita has won various accolades for her work, including 100 Most Influential Women in German Business (manager magazin), Top Lawyer (Wirtschaftswoche), Winner of the Strive Awards in the category Sustainability, Pioneer in the area of sustainability (Juve), International Trade Lawyer of the Year (Germany) at the 2020 ILO Client Choice Awards, Young Global Leader of the World Economic Forum, Capital 40 under 40, and International Trade Lawyer of the Year (New York) at the 2016 ILO Client Choice Awards. In 2023, Handelsblatt recognized her as one of Germany’s dealmakers and among the “most sought-after advisors of the country” in the field of sustainability.

Author

Dr. Alexander Ehrle is a member of the Firm's International Trade Practice in Baker McKenzie's Berlin office. Alexander studied law at the Universities of Heidelberg, Montpellier (France), Mainz, Munich and New York (NYU), specializing in Public International and European Law. He worked as an advisor and member of a developing country's delegation at the United Nations before qualifying for the German bar. He spent his clerkship with the Higher Regional Court in Berlin, the German Ministry of Foreign Affairs in Berlin and Tokyo, as well as an international law firm in Frankfurt and Milan. He wrote his doctoral dissertation on the structural changes of public international law and their conceptualization in academic discourse, basing his research on the governance of areas beyond national jurisdiction. Alexander is admitted to practice in Germany and New York.

Alexander co-chairs the Business & Human Rights Committee of the American Bar Association’s International Law Section and has been recognized as one of 40 under 40 lawyers worldwide for foreign investment control by the Global Competition Review.

Author

Kimberley Fischer is a member of the International Trade Practice in Baker McKenzie's Berlin office. She joined the Firm in 2022. Kimberley studied law at the Ruprecht Karls University of Heidelberg and the Universidad de Deusto (Spain), with a focus on public international law and human rights. Prior to joining the Firm, Kimberley completed her legal traineeship at the Higher Regional Court of Frankfurt am Main, the German Federal Foreign Office in Berlin and at an international law firm in Brussels and Frankfurt am Main. She also gained significant experience in public (international) law as a research assistant at the University of Heidelberg and at a reputable law firm.