The Office for Product Safety and Standards (OPSS) published a report on 23 May 2022 which considered the impact of artificial intelligence (AI) on product safety. This issue is also being considered in a number of other jurisdictions (see, for example, the EU’s Proposal for a Regulation laying down harmonised rules on AI).
The report provides a framework for considering the impact of AI consumer products on existing product safety and liability policy. This framework seeks to support the work of policymakers by highlighting the main considerations to be taken into account when evaluating and developing product safety and liability policy for AI consumer products. No timeline is stated in the report for that evaluation and development to take place, but the report makes clear the view that work is needed to ensure the UK’s product safety and liability regime can deal with developments in AI.
- Potential negative implications of AI
The report considers the potential negative implications of AI use on the safety of consumer products. In particular:
- Complexity – the characteristics of AI (those identified in the report include mutability, opacity, data needs, and autonomy) can translate into errors or challenges for AI systems that have the potential to cause harm. Further, there is often a need for integration or interoperability between AI products, leading to complex supply chains in which many different economic operators are directly or indirectly involved in product development, bringing increased complexity into the product lifecycle.
- Machine learning (ML) – ML models can give a product the ability to learn and change its actions on the basis of new data without human oversight, changing the product’s characteristics, including safety features, and resulting in unpredictability.
- Robustness and predictability – challenges can arise from the need for significant amounts of data to assist AI with decision making and functioning, and there is also a risk that biases may be built into a dataset used by AI to learn.
- Transparency and explainability – the complexity of AI, and its ML capabilities, can impact the ability to understand the reasons for an error or malfunction.
- Fairness and discrimination – if AI relies on biased data to aid its decision making, its behaviour could change from individual to individual, leading to discrimination (and possibly discrimination claims).
- Product safety opportunities brought by AI
The report also considers the ways in which the incorporation of AI systems into manufactured consumer products can be of benefit. More specifically:
- Enhanced safety outcomes for consumers – AI-led improvements in design and manufacturing processes, and the use of AI in customer service (e.g. virtual assistants) to answer queries and provide recommendations on safe usage to optimise product performance, can deliver greater safety outcomes for consumers.
- Prevention of product safety issues – products can provide real-life insights into product use and give manufacturers critical information on when a product embedded with AI might need repairs, before any safety issue arises.
- Preventing mass recalls – AI can enhance data collection during industrial assembly, enabling the discovery of non-conforming events on a production line, improving inspection, and allowing post-purchase data to be monitored, reducing the likelihood that a future recall will be needed.
- Protecting consumer safety and privacy – AI can be used to detect, analyse and prevent cyber-attacks.
- Regulatory challenges resulting from AI-driven consumer products
The report opines that the current legal framework is in many ways insufficient to deal with AI. In particular, there are various shortcomings from a product safety and liability perspective:
- Definitions – it is not clear to what extent more complex AI systems fall within the existing definitions of “product”, “producer” and “placing on the market”, as well as the related concepts of safety, harm, damages, and defects. For example, the definition of “product” in the General Product Safety Regulations 2005 (GPSR) does not explicitly include or exclude software, leaving the position uncertain.
- Placing on the market – the current legislative focus on ensuring compliance at the point at which a product is placed on the market may no longer be sufficient or appropriate where a product has the potential to change autonomously once in the hands of a consumer.
- Liability – the lack of transparency and explainability of AI models (i.e. the use of algorithms and ML) can impact the ability to understand the reasons for an error or malfunction. If physical harm is caused, this has implications for assigning liability and may impact the ability of those who have suffered harm to obtain compensation. Further, the possibility of products undergoing changes after being placed on the market, for example through software updates or ML, produces a complex picture of liability: liability in such situations will be difficult to understand or predict.
- Types of harm – AI consumer products may pose risks of immaterial harms (e.g. psychological harm or harm to one’s privacy and reputation) or indirect harms arising from cyber security vulnerabilities, which are not currently addressed in the GPSR.
- Future outlook
The report notes that the hypothetical application of the UK’s product liability rules to AI products is a challenge, and that it remains unclear how product safety rules will apply to AI products.
At the moment, there are two core ways in which challenges brought by AI are being addressed:
- Standardisation – AI standards could be developed by industry as a tool for self-regulation, allowing industry itself to define the requirements for product development. Standards may foster transparency and trust in the application of these technologies, while supporting communication between all parties involved through uniform terms and concepts.
- Industry and non-legislative approaches to tackling AI challenges – professional associations and consortia publish specifications or recommendations on AI. Many of the initiatives to tackle AI-related challenges have been driven by industry, NGOs or consumer groups.
The inevitability of future AI developments is one of the factors driving likely reform at a UK level.