The Office for Product Safety and Standards (OPSS) published a report on 23 May 2022 which considered the impact of artificial intelligence (AI) on product safety. This issue is also being considered in a number of other jurisdictions (see, for example, the EU’s Proposal for a Regulation laying down harmonised rules on AI).

The report provides a framework for considering the impact of AI consumer products on existing product safety and liability policy. This framework seeks to support the work of policymakers by highlighting the main considerations to be taken into account when evaluating and developing product safety and liability policy for AI consumer products. The report states no timeline for that evaluation and development, but makes clear the view that work is needed to ensure the UK’s product safety and liability regime can deal with AI developments.

  • Potential negative implications of AI

The report considers the potential negative implications of AI use for the safety of consumer products. In particular:

  1. Complexity – the characteristics of AI (those identified in the report include mutability, opacity, data needs, and autonomy) can translate into errors or challenges for AI systems that have the potential to cause harm. Further, there is often a need for integration or interoperability between AI products, leading to a complex supply chain, with many different economic operators directly or indirectly involved in product development, bringing increased complexity into the product lifecycle.
  2. Machine learning (ML) – ML models can give a product the ability to learn and change its actions on the basis of new data without human oversight, altering a product’s characteristics, including safety features, and resulting in unpredictability.
  3. Robustness and predictability – challenges can occur because of the need for a significant amount of data to assist AI with decision-making and functioning, and there is also a risk that biases may be built into a dataset used by AI to learn.
  4. Transparency and explainability – the complexity of AI, and its ML capabilities, can impact the ability to understand the reasons for an error or malfunction.
  5. Fairness and discrimination – if AI relies on biased data to aid its decision making, its behaviour could change from individual to individual, leading to discrimination (and possibly discrimination claims).
  • Product safety opportunities brought by AI

The report also considers the ways in which the incorporation of AI systems into manufactured consumer products can be of benefit. More specifically:

  1. Enhanced safety outcomes for consumers – AI-led improvements in design and manufacturing processes, and the use of AI in customer service (e.g. virtual assistants) to answer queries and provide recommendations on safe usage, can optimise product performance and ensure greater safety outcomes for consumers.
  2. Prevention of product safety issues – AI-embedded products can provide real-life insights on product use and give manufacturers critical information on when a product might need repairs, before any safety issue arises.
  3. Preventing mass recalls – AI can enhance data collection processes during industrial assembly, enabling the discovery of non-conforming events on a product line, improving inspection, and monitoring post-purchase data to reduce the likelihood that a future recall will be needed.
  4. Protecting consumer safety and privacy – AI can be used to detect, analyse and prevent cyber-attacks.
  • Regulatory challenges resulting from AI-driven consumer products

The report opines that the current legal framework is in many ways insufficient to deal with AI. In particular, there are various shortcomings from a product safety/liability perspective:

  1. Definitions – it is not clear to what extent more complex AI systems fall within the existing definitions of “product”, “producer” and “placing on the market”, as well as the related concepts of safety, harm, damages, and defects. For example, the definition of “product” in the General Product Safety Regulations 2005 (GPSR) does not explicitly include or exclude software, leaving the position uncertain.
  2. Placing on the market – the current legislative focus on ensuring compliance at the point at which a product is placed on the market may no longer be sufficient / appropriate in situations where a product has the potential to change autonomously once in the hands of a consumer.
  3. Liability – the lack of transparency and explainability of AI models (i.e. the use of algorithms and ML) can impact the ability to understand the reasons for an error or malfunction. If physical harm is caused, this has implications for assigning liability and may impact the ability of those who have suffered harm to obtain compensation. Further, the possibility of products undergoing changes after market placement, for example through software updates or ML, produces a complex picture of liability, which in such situations will be difficult to understand or predict.
  4. Types of harm – AI consumer products may pose risks of immaterial harms (e.g. psychological harm or harm to one’s privacy and reputation) or indirect harms from cyber security vulnerabilities, which are not currently addressed in the GPSR.
  • Future outlook

The report notes that the hypothetical application of the UK’s product liability rules to AI products is a challenge, and that it remains unclear how product safety rules will apply to AI products.

At the moment, there are two core ways in which challenges brought by AI are being addressed:

  1. Standardisation – AI standards could be developed by industry as a tool for self-regulation, allowing industry to define for itself the requirements for product development. Standards may foster transparency and trust in the application of technologies, and at the same time support communication between all parties involved through the use of uniform terms and concepts.
  2. Industry and non-legislative approaches to tackling AI challenges – professional associations and consortia publish specifications or recommendations on AI. Many of the initiatives to tackle AI-related challenges have been driven by industry, NGOs or consumer groups.

The inevitability of future AI developments is one of the factors driving likely reform at a UK level.

Author

Kate Corby is a partner in Baker McKenzie’s Dispute Resolution team in London. Kate has over two decades' experience representing clients in complex litigation and arbitration, with a focus on arbitration of construction, engineering and infrastructure-related disputes. She has handled arbitrations under the rules of all of the major arbitral institutions, as well as ad hoc arbitrations, seated in London and around the world, under a wide range of governing laws. Kate also has significant experience advising on product liability, safety and regulatory compliance. She co-leads the firm's Industrials, Manufacturing and Transportation Industry Group in EMEA.
Kate is also well known for her inclusion, diversity & equity work, particularly organising the London chapter of #Arbitration Lunch Match, sitting on the Global Executive Committee of the Equal Representation of Experts Pledge, and co-chairing the London office's BakerWomen Affinity Group.
Kate is ranked as a Leading Individual in Legal 500 UK in both of her practice areas, in which she is described as “hugely impressive, extremely bright and on-the-ball, and has a deep understanding of the client’s needs and what really matters on the case. She is simply brilliant.” She is also individually ranked by Chambers, which notes her “excellent commercial awareness and vision” and that she “provides excellent industry insight and customer service.” Kate is further recognised in Who’s Who Legal.

Author

Jo is a senior associate in Baker McKenzie's Dispute Resolution team in London. Jo advises clients in a wide range of industries on complex commercial disputes and investigations. She also regularly provides specialist product safety and regulatory compliance advice and acts for clients in product liability disputes. Another of Jo's areas of specialism is advising clients on a wide range of regulatory, public and administrative law issues, including judicial review, consultations, freedom of information and public procurement. Jo's practice often involves drawing on crisis management experience to help clients protect their reputations and shareholder value when dealing with urgent, time-pressured issues and/or intense public scrutiny. Jo was ranked as a Next Generation Lawyer in the Legal 500 “Product liability: defendant” category in 2017. Jo has participated in the UK Government's Working Group on product safety and recalls and has assisted with the development of the Government's training programme for Trading Standards Officers on the new UK Code of Practice for Product Recalls.