United Kingdom: AI in recruitment - How inclusive is your algorithm?

In brief

The use of algorithmic decision-making in recruitment to help improve the effectiveness and efficiency of the process is, unsurprisingly, on the rise. Put simply, this technology can enable companies to review far greater numbers of applications at speed and, in theory, allows for an unbiased approach to recruitment decision-making. However, as the UK Information Commissioner’s Office (ICO) has set out in its recent guidance on this topic, employers should take a critical and careful approach to when and how this technology is applied. If not applied carefully, these tools can actually exacerbate the inequalities they are aiming to address, and could cause employers to fall foul of the UK’s equality and data protection legislation.


Key takeaways

Whilst government and regulators in the UK recognise the growing place of AI in recruitment, it is clear that they expect companies to take a considered and critical approach to when and how it is used. Companies should be clear on their approach from the outset (recording their analysis in a data protection impact assessment) and keep this under regular review as the technology (and their use of it) develops. 

Background

In late 2020 the UK Government’s Centre for Data Ethics and Innovation (CDEI) released an in-depth report on algorithmic decision-making, which was swiftly followed by guidance from the ICO on its particular uses in recruitment. AI technologies (such as algorithmic decision-making) are therefore a hot topic for government and regulators alike. Outside the UK, a group of U.S. senators also sent a joint letter to the chair of the U.S. Equal Employment Opportunity Commission asking for greater oversight and enforcement of these technologies, highlighting that this is a global issue.

As the ICO outlines in its report, given the job losses arising from the COVID-19 pandemic there are likely to be many more applicants for fewer roles. In addition, employers are looking at ways to save costs; as automation of standard processes also means a cheaper HR cost base, the use of technology to improve efficiency in recruitment is tempting for many organisations. Further, the Black Lives Matter movement and a growing global awareness of the importance of a diverse and inclusive workforce may also lead companies to turn to technology in an attempt to address issues of bias that have traditionally arisen through human-led decision-making.

Is AI inclusive?

Both the CDEI and ICO recognise why these tools may be appealing for employers and appreciate the potential that AI has for reducing discrimination and bias in recruitment by standardising processes and removing individual discretion.

However, as some early adopters of AI-led decision-making have shown, improving the efficiency of a process using technology can often come at the cost of perpetuating existing bias or even creating new problems. For example, a credit card launched by a major technology company used an algorithm which was alleged to provide more credit to men than to women in comparable circumstances. In the recruitment sphere, software designed to sift CVs for another technology company was shown to disproportionately select against female candidates, even with corrections to ignore obvious references to gender (e.g. listing membership of a women’s football team on a CV).

The root of these problems is often the fact that algorithms are created using data on previous recruitment rounds and successful candidates. Such algorithms risk inheriting pre-existing human bias, learning to regard as successful candidates (or creditworthy individuals) those who share characteristics with people recruited in the past. In particular, machine learning tools often make correlations as part of their decision-making which, as they are usually unintuitive, are hard to detect and eliminate. For example, even where obvious references to gender were ignored by the CV-sifting technology, it still made connections between implicitly gendered words and phrases (e.g. verbs that were correlated with men over women) and a candidate’s chances of success, as the simple sketch below illustrates.
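To make the proxy problem concrete, the short Python sketch below shows how an audit of a training corpus might surface words whose usage rates differ sharply between groups, i.e. implicit gender proxies that survive the removal of explicit gender references. The data, token sets and the flagging threshold are invented purely for illustration and do not reflect any vendor’s actual method:

```python
# Hypothetical audit of a training corpus for implicit gender proxies.
# The data, tokens and the 0.5 threshold are illustrative assumptions.
from collections import Counter

# toy historical data: (set of CV tokens, applicant gender)
history = [
    ({"executed", "captained", "delivered"}, "M"),
    ({"executed", "managed"}, "M"),
    ({"collaborated", "organised", "delivered"}, "F"),
    ({"collaborated", "supported"}, "F"),
]

totals = Counter(gender for _, gender in history)
by_gender = {"M": Counter(), "F": Counter()}
for tokens, gender in history:
    by_gender[gender].update(tokens)

# flag tokens whose usage rate differs sharply between groups: these
# can act as proxies even when explicit gender words are removed
for token in set(by_gender["M"]) | set(by_gender["F"]):
    rate_m = by_gender["M"][token] / totals["M"]
    rate_f = by_gender["F"][token] / totals["F"]
    if abs(rate_m - rate_f) >= 0.5:
        print(f"possible proxy: {token!r} (M {rate_m:.0%}, F {rate_f:.0%})")
```

A real audit would of course run over thousands of documents and use proper statistical tests, but the underlying point is the same: a model can reconstruct gender from seemingly neutral vocabulary.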

AI programs also often measure success against a defined norm, which could place those with a disability at a particular disadvantage. Someone with a speech impediment may, for example, perform less well in a video interview that is assessed or analysed using an AI program. 

The legal considerations

Under UK data protection law, the first principle is that personal data must be processed in a way that is lawful, fair and transparent. The ICO has stated in its guidance that if an AI system has unjustified, adverse effects on individuals (including any discrimination against people who have a protected characteristic) this would not be fair processing and would be a breach of the law.

Biases in algorithms may also give rise to issues under the UK Equality Act 2010. Whilst these biases are unlikely to be directly discriminatory, it is possible that the algorithm has a disproportionately adverse impact on individuals who have a certain protected characteristic. For example, the CV sifting technology outlined above did not select against women because of their gender, but its CV sifting process did have the effect of disproportionately filtering out female candidates.

Whilst direct discrimination can almost never be justified, indirect discrimination is capable of justification where it amounts to a proportionate means of achieving a legitimate aim. Companies may find it easy to identify the legitimate aim: in the recruitment context, this is enabling the company to consider a far larger pool of candidates because of the speed with which the AI technology can review applications. However, companies may find it difficult to identify indirect discrimination. To address this, employers who do rely on AI in recruitment should implement a process of regularly monitoring algorithms to assess whether there is any disparate impact on individuals with particular protected characteristics, identifying what has caused that disparate impact, considering whether it can be justified and, if not, making any necessary adjustments (a simple illustration of such a monitoring check follows below). This process of regular monitoring and adjustment does involve additional time and investment, but is likely in the end to mitigate the risks of both discrimination and unfairness, and to lead to better recruitment decisions.
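As a minimal sketch of what periodic disparate-impact monitoring could look like in practice, the Python below compares selection rates across groups and flags any group falling below the US “four-fifths” rule of thumb. The group labels, counts and the 0.8 threshold are assumptions for illustration only; UK law sets no fixed numerical test, so any threshold would need to be chosen and justified by the employer:

```python
# Hypothetical periodic disparate-impact check on sift outcomes.
# Groups, counts and the four-fifths (0.8) threshold are illustrative.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants); returns each
    group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}

# e.g. one month of CV-sift outcomes, broken down for monitoring
monthly = {"men": (120, 400), "women": (70, 380)}
for group, ratio in impact_ratios(monthly).items():
    status = "REVIEW - possible disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

On these invented figures, women’s selection rate is around 61% of men’s, which would trigger a review: the employer would then investigate the cause, consider whether the difference can be justified and adjust the algorithm if not.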

It is also worth noting that the scenarios where new technologies are most useful are potentially also those where they carry the highest risks. Large-scale use of potentially biased automated decision-making tools could unwittingly expose an employer to a large number of discrimination claims, as thousands of applications may have been processed using the technology before the issue is spotted and corrected.

Tips for employers

Despite the potential pitfalls, the ICO does not say the technology cannot be used. Instead, it recommends that companies start by considering whether using AI is in fact a necessary and proportionate way of solving a problem they face. The benefits of automation would need to outweigh the potential issues that could arise in order for the use of this technology to be justified. This assessment is especially important given that fully automated decision-making in recruitment is likely to be prohibited under the GDPR. Employers will need to find a way to bring a human element to the decision-making process, which may counteract the efficiencies of an AI-led process.

Even where employers do decide that the use of AI is necessary, they should consider from the outset how they will sufficiently mitigate any risk of discrimination or bias. The CDEI recommends, for example, collecting demographic data on applicants and using it to monitor whether any discriminatory or unfair patterns are emerging from the use of an algorithm. Collecting this data may, however, give rise to its own data protection considerations, and monitoring after the fact may come too late to prevent early issues.

Employers should also consider whether candidates may need reasonable adjustments to an AI recruitment process, just as they might for an in-person process. This could be an important step for employers in meeting their obligations under the Equality Act 2010.
