Artificial Intelligence and the Workplace

AI is increasingly being used by HR specialists. In this article we explore some of the pitfalls and how best to avoid discrimination claims and making unfair decisions.

Published 2 August 2021

Background

As technology continues to develop, many businesses have deployed artificial intelligence (AI) software to assist in areas such as recruitment, training, employee monitoring, disciplinary processes, and dismissals. UK legislation is playing catch-up with these technological developments, and there is no dedicated legislation dealing with this growing issue. Instead, existing legal frameworks must be analysed to see how AI interacts with employment law and what risks it poses for employers.

Before going further, it is worth defining AI and briefly discussing how it is created. AI has no single agreed definition, but it can broadly be captured by the idea of making machines more intelligent – the concept that a machine could work in the same way as a human but more efficiently and with better results. Whilst the ultimate aim is for AI to act without human involvement, creating an AI system requires human input. To programme a system to draw conclusions from data, there must first be an analysis and understanding of human thought processes and how they precede action, and then a method of expressing that analysis as instructions for the AI system. It is therefore not surprising that human biases are sometimes coded into the software.

Discrimination Law

The Equality Act 2010 (EqA) protects employees from discrimination on the grounds of protected characteristics such as sex, race, age, and disability. This area of employment law is already displaying some tensions in relation to AI. In March 2021, Uber came under fire for its use of facial recognition software after evidence emerged that it was less accurate for darker skin tones. The software's failure to recognise non-white faces resulted in some Uber drivers being banned from the platform and losing their income. In the United States, an AI system was used to assist judges in sentencing decisions; however, because of flaws in the data set the system had been given, the programme was twice as likely to falsely predict that black defendants would reoffend. In effect, the AI had become discriminatory.

Under UK law, system errors such as those described above would open employers up to a discrimination claim. If the AI system itself treats employees differently because of a protected characteristic, this could result in a direct discrimination claim. A second form of discrimination employees are protected against is indirect discrimination. This broadly means that a provision, criterion or practice (PCP) put in place disadvantages an employee because of their protected characteristic. As an AI system is based on an algorithm (i.e. a set of rules), the algorithm could be classified as a PCP and so give rise to an indirect discrimination claim.

In its 2020 Worker Experience Report, the Trades Union Congress (TUC) found that, unsurprisingly, employees did not understand the implications of AI used in the workplace. More worryingly, employers who had purchased and implemented AI products for their businesses often had little understanding of those implications either. Employers should therefore be very careful about deploying AI systems they do not understand how the software works, or they risk relying on an imperfect system that could result in discrimination claims.

Unfair Decisions

For a dismissal to be fair, the decision must be “within the range of reasonable responses”. This may have to be explained or justified by a human who has relied on data produced by AI. Because of AI’s complexity, the process that generated that data is often inaccessible. If an employee is dismissed without being told what data was used to reach the decision, or how it was weighted, the dismissal is likely to be unfair.

AI can make decisions that affect employees’ livelihoods (e.g. performance reviews, disciplinary issues, and dismissals). A further consequence is the risk of breaching the implied term of mutual trust and confidence. If AI is used to dismiss an employee, the employer may be unable to explain how the system reached its conclusion, either because it is too complex or because of the ‘black box’ problem – the inability of some AI software to explain the rules and logic followed in reaching its decisions. Employers are unlikely to be able to hide behind either the AI’s complexity or a black box issue to justify an inadequate explanation, particularly where an employee is disciplined or dismissed as a result.

The future and advice for employers

AI will continue to develop over the years and will likely outperform humans in some aspects of working life. However, the technology has wide implications and employers should be cautious. Two risks stand out: the human error of assuming AI is infallible, and the lack of transparency in its outcomes. Employers bringing in AI systems to assist with decision-making should set clear, stated limits on how the technology is used.

The TUC’s 2020 report also highlighted a lack of consultation with employees when AI systems were implemented. Employers should therefore involve employees at an early stage when deciding how AI can best be deployed in the business. Finally, employees should be able to access sufficient information about how an AI system is being used so they can be reassured that it is being used in a lawful, proportionate, and accurate way.
