Shorter Reads

The risks of using AI in employment processes

First written for People Management, this article by Tania Goodman and Patrick Kilgallon examines the minefield of using technology to help make key people decisions.

Published 29 September 2021

As UK legislation continues to play catch-up with technological developments – there is still no specific regulation to deal with the growing use of artificial intelligence (AI) – employers must turn to existing legal frameworks to see how AI interacts with employment law and the risks it poses.

The Equality Act 2010 protects employees from discrimination on the grounds of protected characteristics, such as sex, race, age, and disability, and this area of employment law is already displaying some tensions in relation to AI.

In March this year, Uber came under fire for its use of facial recognition software after evidence showed it performed poorly on darker skin tones. The software’s failure to recognise non-white faces resulted in some Uber drivers being banned from the platform and losing their income.

In the US, an AI system used to assist judges in determining sentences was trained on a flawed initial data set, with the result that the programme was twice as likely to falsely predict that black defendants would be repeat offenders. In effect, the AI had become discriminatory.

Under UK law, system errors such as those described above would expose employers to discrimination claims. If an AI system itself treats employees differently because of one of their protected characteristics, this could result in a direct discrimination claim.

Employees are also protected against indirect discrimination, broadly arising when a ‘provision, criterion or practice’ put in place by an employer disadvantages an employee because of their protected characteristic. As an AI system is based upon an algorithm or set of rules, this could be classified as a ‘provision, criterion or practice’ and give rise to an indirect discrimination claim.

Unfair decisions

For a decision to dismiss an employee to be fair it must, according to UK employment legislation, be ‘within the range of reasonable responses’ and, if needed, be explained or justified by a person. If that decision is driven by complex AI algorithms, the underlying data and reasoning may be inaccessible or difficult to explain.

If an employee is dismissed without being told what data was used to reach the decision, or how that data was weighted, the dismissal is likely to be unfair.

AI is increasingly used to make decisions that affect employees’ livelihoods – for example, in performance reviews, disciplinary matters and dismissals.

Black box issues – where it proves difficult or impossible to explain the rules and logic behind an AI-driven decision – can cause employers very real problems. Employers cannot hide behind such issues: doing so leaves them at risk of claims for unfair or constructive dismissal.

The future and advice for employers

AI will continue to develop and will likely outperform humans in some aspects of working life. However, the technology has wide implications and employers should be cautious. Two flaws stand out: the human error of assuming AI is infallible, and the lack of transparency in its outcomes.

Employers bringing in AI systems to assist with decision-making should set clear limits on how the technology is used, involve employees at an early stage when deciding how AI is best deployed in the business, and give employees enough information about how the system works to reassure them that it is being used in a lawful, proportionate and accurate way.

 

First written for People Management, published 28/09/2021
