Longer Reads
Published 18 April 2023
The world of work is changing fast, in part due to unprecedented technological advancements in recent years. This pace of change can be unsettling to some and, in many cases, neither the law nor businesses have been able to catch up quickly enough.
Artificial intelligence (AI) and related technologies can permeate almost every facet of the employment relationship – from recruitment, line and performance management to decisions around who to dismiss or retain.
While filtering tools in the recruitment industry, for example, have been used for decades, more sophisticated AI tools are now being deployed to search a candidate’s social media, screen application forms and CVs, or analyse biometric data such as tone of voice, body language and facial movements during interviews.
With the rise of remote working, advanced technologies have also increasingly featured in businesses’ arsenal of performance management and work allocation tools. These tools can monitor employees’ keystrokes, effectively distribute work between teams and even analyse possible causal factors driving performance.
Some employers, meanwhile, are utilising algorithms in the redundancy selection process, apps to report sexual harassment in the workplace, and AI tools to identify misconduct which could trigger disciplinary or dismissal processes.
Risks
As the boundaries of decision-making shift from the human towards the machine, however, employers embracing AI systems must carefully balance any commercial benefits against the legal, financial and reputational risks.
These risks can present themselves from the very beginning of the employment relationship. Employers may wish, for example, to use machine learning algorithms to target job adverts on social media in the hope of maximising their advertising budget.
The human engineer sets up the AI model, tells it what task to achieve (for example, targeting a job advert) and provides it with a vast data set. The AI model then uses this data to teach itself how to achieve the task.
At this stage, there is often little accountability or transparency about how the technology decides who sees the job advert and the risk of facilitating direct discrimination becomes palpable.
Research has shown, for example, that a gender-neutral STEM career advert run through Facebook’s algorithms was 20% more likely to be seen by men than women.[i]
The risk of bias is also present at later stages of the recruitment cycle. In 2017, Amazon abandoned an AI tool it had created to sift through CVs and, reportedly, to find the 'ideal' Amazon employee.
Using internal recruitment data (which showed that, historically, men tended to be recruited over women), the algorithm taught itself that male candidates were preferable to female candidates and so showed a persistent bias against the latter group.
Emotion recognition technology can also be used to process biometric data in AI-powered interviewing. The AI models use this data to “learn” how a successful candidate presents themselves. Most models, however, are trained using data from non-disabled people and so these technologies can disadvantage disabled and neurodiverse candidates.
The vast amount of data often needed by AI systems is necessarily obtained from the past and so employers easily run the risk of setting AI up to perpetuate historic biases.
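The bias-inheritance mechanism described above can be sketched with a toy example. The data and keywords here are entirely hypothetical, and real CV-screening models are far more sophisticated, but the failure mode is the same: a model that learns only from historical hiring outcomes will faithfully reproduce whatever bias those outcomes contain.

```python
# Toy illustration with hypothetical data: a scorer trained purely on
# historical hiring decisions inherits the bias baked into that history.
from collections import defaultdict

# Hypothetical historical records: (keyword found in CV, was hired)
history = [
    ("captained men's chess club", True),
    ("captained men's chess club", True),
    ("captained women's chess club", False),
    ("captained women's chess club", True),
]

def train(records):
    """Learn a historical hire rate per keyword -- the 'model' is just the past."""
    hired, seen = defaultdict(int), defaultdict(int)
    for keyword, was_hired in records:
        seen[keyword] += 1
        hired[keyword] += was_hired
    return {keyword: hired[keyword] / seen[keyword] for keyword in seen}

model = train(history)
# The model now scores CVs mentioning the men's club higher than those
# mentioning the women's club -- not because of any genuine signal, but
# because it has simply memorised the historical pattern.
```

No engineer told the model to prefer one group; the preference emerged from the data alone, which is why auditing training data matters as much as auditing the algorithm itself.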
Employers have a duty under the Equality Act 2010 not to discriminate, directly or indirectly, against an employee or candidate on the basis of a protected characteristic (PC), such as race, sex or disability.
Where an algorithmic decision appears to engage a PC, the burden of proof is likely to shift to the employer to prove the AI system is not tainted by discrimination, or to offer an objective justification of the outcome. This will be difficult if the employer does not fully understand how the algorithm works.
A report produced for the Trades Union Congress (TUC) has warned of the potential for flawed algorithms to make life-changing decisions about workers’ lives.[ii]
The report cited a case study in which a long-standing employee, on a final warning due to several periods of unauthorised absence, was dismissed because an automated absence management system incorrectly processed a doctor's fit note. The manager at the dismissal hearing assumed that the automated system was correct.
In these circumstances, an eligible employee would likely be able to bring an unfair dismissal claim.
Finally, employers should also be alert to several other considerations, including the impact of workplace surveillance tools on an employee's right to privacy and the implied duty of trust and confidence, as well as an employer's obligations under data protection legislation.
Practical considerations for employers
Whether the technology used is sophisticated or straightforward, employers must ensure that a human manager always has the final responsibility for any workplace decision.
This accountability cannot be passed on to the technology, and employers using AI should ensure managers are upskilled so they can understand how the algorithms work and can transparently explain why a particular outcome has been reached.
AI technology is best implemented within a responsible AI governance system and in conjunction with early communication and consultation with employees.
The sparse legal and regulatory framework that governs the use of AI in the workplace is beginning to develop and organisations will need to be aware of any new obligations.
The EU’s proposed AI Act is the first law on AI by any major regulator and is likely, post-Brexit, to have an indirect effect on the UK.
Like the EU’s General Data Protection Regulation (GDPR), the EU AI Act is expected to become a global standard, defining what is expected of AI models and organisations that use them.
The UK is proposing a new regulatory approach of its own, with a White Paper released in March 2023, and employers would be well advised to keep a watchful eye on these developments.[iii]
For more information, please visit our Employment Lawyers page.
Sources:
[i] Anja Lambrecht and Catherine E. Tucker, 'Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads', SSRN.
[ii] TUC, 'TUC and legal experts warn of "huge gaps" in British law over use of AI at work'.
[iii] GOV.UK, 'AI regulation: a pro-innovation approach'.