In brief

The use of algorithmic decision-making in recruitment to improve the effectiveness and efficiency of the process is unsurprisingly on the rise. Put simply, this technology can enable companies to review far greater numbers of applications at speed and, in theory, allow for an unbiased approach to recruitment decision-making. However, as the UK Information Commissioner’s Office (ICO) has set out in its recent guidance on this topic, employers should take a critical and careful approach to when and how this technology is applied. If not applied carefully, these tools can actually exacerbate the very inequalities they are intended to address, and could cause employers to fall foul of the UK’s equality and data protection legislation.

Contents

  1. Key takeaways
  2. Background
  3. Is AI inclusive?
  4. The legal considerations
  5. Tips for employers

Key takeaways

Whilst government and regulators in the UK recognise the growing place of AI in recruitment, it is clear that they expect companies to take a considered and critical approach to when and how it is used. Companies should be clear on their approach from the outset (recording their analysis in a data protection impact assessment) and keep this under regular review as the technology (and their use of it) develops.

Background

In late 2020, the UK Government’s Centre for Data Ethics and Innovation (CDEI) released an in-depth report on algorithmic decision-making, which was swiftly followed by guidance from the ICO on its uses in recruitment. AI technologies such as algorithmic decision-making are therefore a hot topic for government and regulators alike. Outside the UK, a group of U.S. senators also sent a joint letter to the chair of the U.S. Equal Employment Opportunity Commission, asking for greater oversight and enforcement of these technologies, highlighting that this is a global issue.

As the ICO outlines in its report, given the job losses arising from the COVID-19 pandemic, there are likely to be many more applicants for fewer roles. In addition, employers are looking for ways to save costs; as automation of standard processes also means a cheaper HR cost base, the use of technology to improve efficiency in recruitment is tempting for many organisations. Further, the Black Lives Matter movement and a growing global awareness of the importance of a diverse and inclusive workforce may also lead companies to turn to technology in an attempt to address issues of bias that have traditionally arisen through human-led decision-making.

Is AI inclusive?

Both the CDEI and ICO recognise why these tools may be appealing for employers and appreciate the potential that AI has for reducing discrimination and bias in recruitment by standardising processes and removing individual discretion.

However, as some early adopters of AI-led decision-making have shown, improving the efficiency of a process using technology can often come at the cost of perpetuating existing bias or even creating new problems. For example, a credit card launched by a major technology company used an algorithm that was alleged to offer more credit to men than to women in comparable circumstances. In the recruitment sphere, software designed to sift CVs for another technology company was shown to disproportionately select against female candidates, even with corrections to ignore obvious references to gender (e.g. listing membership of a women’s football team on a CV).

The root of these problems is often the fact that algorithms are trained on data from previous recruitment rounds and past successful candidates. This risks encoding pre-existing human bias: the algorithm learns to regard successful candidates (or creditworthy individuals) as those who share characteristics with people recruited in the past. In particular, machine learning tools often rely on correlations in their decision-making which, because they are usually unintuitive, are hard to detect and eliminate. For example, even where obvious references to gender were ignored by the CV sifting technology, it still made connections between implicitly gendered words and phrases (e.g. verbs that were correlated with men over women) and a candidate’s chances of success.
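
To illustrate the mechanism, the following Python sketch trains a simple model on deliberately biased, entirely synthetic hiring data (it is not any vendor’s actual system). Gender itself is excluded from the inputs, yet the model still learns to penalise a feature that merely correlates with gender:

    # Synthetic illustration: a model trained on historically biased
    # outcomes can reconstruct bias from a proxy, even when the
    # protected characteristic itself is removed from the inputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Hidden attribute the model is never shown directly.
    is_female = rng.integers(0, 2, n)

    # A CV feature that merely correlates with gender (e.g. an
    # implicitly gendered verb), not with actual ability.
    proxy_word = (is_female + rng.integers(0, 2, n) > 1).astype(int)

    # Genuine skill, independent of gender.
    skill = rng.normal(size=n)

    # Historical hiring decisions that were biased towards men.
    hired = ((skill + 1.5 * (1 - is_female) + rng.normal(size=n)) > 1).astype(int)

    # Train only on the "legitimate" features; gender is excluded.
    X = np.column_stack([skill, proxy_word])
    model = LogisticRegression().fit(X, hired)

    # The proxy feature receives a negative weight: the model has
    # rebuilt the historical bias from an indirect signal.
    print(dict(zip(["skill", "proxy_word"], model.coef_[0].round(2))))

The point of the sketch is that simply deleting the protected characteristic from the training data does not remove the bias; the proxies through which it resurfaces have to be found and audited separately.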

AI programs also often measure success against a defined norm, which could place those with a disability at a particular disadvantage. Someone with a speech impediment may, for example, perform less well in a video interview that is assessed or analysed using an AI program.

The legal considerations

Under UK data protection law, the first principle is that data must be processed in a way that is lawful, fair and transparent. The ICO has stated in its guidance that if an AI system has unjustified, adverse effects on individuals (including any discrimination against people who have a protected characteristic) this would not be fair processing and would be a breach of the law.

Biases in algorithms may also give rise to issues under the UK Equality Act 2010. Whilst these biases are unlikely to be directly discriminatory, it is possible that the algorithm has a disproportionately adverse impact on individuals who have a certain protected characteristic. For example, the CV sifting technology outlined above did not select against women because of their gender, but its sifting process did have the effect of disproportionately filtering out female candidates.

Whilst direct discrimination can almost never be justified, indirect discrimination is capable of justification where it amounts to a proportionate means of achieving a legitimate aim. Companies may find it easy to identify the legitimate aim; in the recruitment context, this is enabling the company to consider a far larger pool of candidates because of the speed with which AI technology can review applications. However, companies may find it difficult to identify indirect discrimination. To address this, employers who rely on AI in recruitment should implement a process of regularly monitoring algorithms to assess whether they have a disparate impact on individuals with particular protected characteristics, identifying what has caused that disparate impact, considering whether it can be justified and, if not, making any necessary adjustments. This process of regular monitoring and adjustment involves additional time and investment, but is likely in the end to mitigate the risks of both discrimination and unfairness, and to lead to better recruitment decisions.
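
As an illustration of what such monitoring might look like in practice (a minimal sketch with hypothetical numbers, not a legal test), the Python snippet below compares selection rates across groups and applies the US “four-fifths” heuristic as a rough flag for disparities that warrant investigation:

    # Minimal monitoring sketch, assuming the employer has (lawfully)
    # collected applicants' demographic data alongside the algorithm's
    # shortlisting outcomes. All figures below are hypothetical.
    from collections import Counter

    def selection_rates(outcomes):
        """outcomes: iterable of (group, selected) pairs."""
        totals, selected = Counter(), Counter()
        for group, ok in outcomes:
            totals[group] += 1
            selected[group] += ok
        return {g: selected[g] / totals[g] for g in totals}

    def impact_ratios(rates):
        """Ratio of each group's selection rate to the highest rate;
        values below ~0.8 are a conventional flag for possible
        disparate impact (a US screening heuristic, not a UK test)."""
        best = max(rates.values())
        return {g: r / best for g, r in rates.items()}

    # Hypothetical shortlisting outcomes from an AI sift.
    outcomes = [("men", True)] * 60 + [("men", False)] * 40 \
             + [("women", True)] * 35 + [("women", False)] * 65
    rates = selection_rates(outcomes)
    print(rates)                 # {'men': 0.6, 'women': 0.35}
    print(impact_ratios(rates))  # women ~0.58 -> flag for investigation

Any such threshold is only a trigger for further investigation and justification analysis; under the Equality Act 2010 the question remains whether a disparity can be objectively justified.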

It is also worth noting that the scenarios where new technologies are most useful are potentially also those where they carry the highest risks. Large-scale use of a biased automated decision-making tool could unwittingly expose an employer to a large number of discrimination claims before the issue is spotted and corrected, as thousands of applications may already have been processed using the technology.

Tips for employers

Despite the potential pitfalls, the ICO does not say the technology cannot be used. Instead, it recommends that companies start by considering whether using AI is in fact a necessary and proportionate way of solving a problem they face. The benefits of automation would need to outweigh the potential issues that could arise in order for the use of this technology to be justified. This assessment is especially important given that fully automated decision-making in recruitment is likely to be prohibited under the GDPR. Employers will need to find a way to bring a human element into the decision-making process, which may counteract the efficiencies of an AI-led process.

Even where employers do decide that the use of AI is necessary, they should consider from the outset how they will sufficiently mitigate any risk of discrimination or bias. The CDEI recommends, for example, collecting demographic data on applicants and using it to monitor whether any discriminatory or unfair patterns emerge from the use of an algorithm. Collecting this data may, however, give rise to its own data protection issues, and monitoring after the fact may come too late to prevent early problems.

Employers should also consider whether candidates may need reasonable adjustments to an AI recruitment process, just as they might for an in-person process. This could be an important step for employers in meeting their obligations under the Equality Act 2010.

Author

Monica Kurnatowska is a partner in the Firm's London office. She focuses on employment law and has been recognised by Chambers UK as a leading lawyer in her field. Monica is a regular speaker at internal and external seminars and workshops, and has written for a number of external publications on bonus issues, atypical workers, TUPE and outsourcing.

Author

Julia Wilson is a partner in Baker McKenzie's Employment & Compensation team in London and co-chair of the Firm's Workforce Redesign client solution. Julia also leads the employment data privacy practice in London. Julia advises multinational organisations on a wide range of employment and data protection matters. She is highly regarded by clients, who describe her as a “standout” performer who "knows how we think." A member of the Firm's Pro Bono Committee, she plays a lead role in the Firm's pro bono relationship with Save the Children International. She also collaborates with Law Works to deliver employment law training to solicitors who provide pro bono advice to individuals. Julia regularly presents and moderates panels on podcasts, webinars and in-person events, is often quoted in mainstream media, and authors articles and precedents for a range of industry and other publications.

Author

Joanna Kingstone is an associate in Baker McKenzie's London office.