Shorter Reads

AI regulation in financial services and how the UK diverges from EU rules

Artificial intelligence (AI) is growing rapidly in complexity and sophistication and is disrupting business processes around the world. Governments and regulatory bodies are struggling to develop frameworks that keep pace with the rapidly evolving technology.

2 minute read

Published 21 January 2022


Key information

  • Sectors: Financial services

AI regulation in the UK

There is no AI-specific legislation in the UK. Rather, all UK businesses must take into account various existing legal obligations when developing and using AI, just as they would in adopting any other new technology. There are three main pieces of legislation which UK organisations need to consider:

  • Data Protection Act 2018 — contains a number of provisions relevant to AI, for example, the obligation to explain to individuals any automated decisions being made about them and the logic behind those decisions.
  • Equality Act 2010 — prohibits discrimination on the basis of “protected characteristics”, e.g., age, disability, race and religion. Firms must therefore take steps to avoid discrimination and bias in AI systems.
  • Human Rights Act 1998 — requires that AI systems treat people fairly.

As financial services providers have undergone digital transformation in recent years, AI has become increasingly important in the financial sector.

The regulators — the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) — have not issued any AI-specific guidance, but existing guidance such as the FCA’s Principles for Businesses (PRIN) and the PRA’s Fundamental Rules will be relevant to firms using AI. Under these rules:

  • AI systems must be sufficiently interpretable and covered by a firm’s risk management systems (see FCA PRIN 3 and PRA Fundamental Rules 5 and 6).
  • AI-assisted decisions must treat customers fairly (see FCA PRIN 6).
  • Firms must be transparent and be able to explain AI decision-making to their customers (see FCA PRIN 7).

The new EU rules

In April 2021, the European Commission published its draft Artificial Intelligence Regulation (draft AI regulation), introducing a first-of-its-kind comprehensive regulatory framework for AI. The rules are still in draft form and will be the subject of complex legislative negotiations, so the regulation is not expected to be finalised and implemented before 2023. Briefly, some important aspects of the draft AI regulation are:

Application

It will apply to all organisations providing or using AI systems in the EU. It does not apply to personal, non-professional activity.

Restrictions

The draft AI regulation takes a “risk-based” approach to categorising AI systems:

  • Unacceptable risk — AI systems which are a clear threat to the safety, livelihoods and rights of individuals (e.g., AI systems designed to manipulate human behaviours) are prohibited.
  • High risk — there are 21 “high-risk” AI systems which are permitted, subject to various compliance requirements.
  • Lower risk — there are certain lower-risk AI systems, e.g., chatbots, which are permitted but will be subject to transparency obligations.
  • Minimal risk — all other AI systems can be developed and used without restriction.

Potential fines

Fines of between €10 million and €30 million, or between 2% and 6% of the organisation’s global annual turnover, whichever is higher, may be levied. The level of fine depends on the nature of the infringement.
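By way of illustration, the “whichever is higher” mechanic means the turnover-based limb will usually set the ceiling for large organisations. The sketch below is a simplified illustration of that arithmetic only; the function name and example figures are assumptions for illustration, not the text of the draft regulation:

    # Illustrative sketch only: the draft regulation caps fines at the higher of
    # a fixed amount and a percentage of global annual turnover.
    def maximum_fine(fixed_cap_eur: float, turnover_pct: float,
                     global_turnover_eur: float) -> float:
        """Return the applicable cap under a 'whichever is higher' rule."""
        return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

    # Top tier (EUR 30m or 6%) for a firm with EUR 2bn global annual turnover:
    print(maximum_fine(30_000_000, 0.06, 2_000_000_000))  # 120000000.0, i.e. EUR 120m

In other words, at the top tier the percentage limb overtakes the €30 million fixed cap once global annual turnover exceeds €500 million.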

The new EU rules: extra-territorial impact

The draft AI regulation will apply to organisations outside the EU if they are: (1) “providers” (i.e., developers) of AI systems for organisations in the EU; or (2) “users” (i.e., organisations that buy an AI system from a provider) of AI systems where the output produced by the system is used within the EU.

This means that both UK-based AI developers who sell their products into the EU, and UK organisations using AI technology which affects individuals in the EU, will need to comply with the new rules. The extra-territorial impact is significant and multinationals will need to consider whether to use a common set of AI systems compliant with the draft AI regulation (once finalised) or adopt different technology in different jurisdictions.

Will there be similar AI-specific rules in the UK?

The UK government has not indicated any plans to follow in the Commission’s footsteps, nor has it expressed a view on what the future of AI regulation will look like.

In the financial sector, the Bank of England and the FCA have launched the AI Public-Private Forum (AIPPF) to facilitate discussion regarding the use and impact of AI in financial services. A particular aim is to gather views on a possible future AI regulatory framework, and in particular to look at how the UK can “strike the right balance between providing a framework that allows for certainty, regulatory effectiveness and transparency, as well as beneficial innovation”. The AIPPF has discussed and evaluated the Commission’s draft AI regulation, so it is clear that the Commission’s rules will be closely considered by UK legislators and regulators in developing any new framework. The AIPPF did not, however, give any indication of when a new regulatory framework will be drafted and implemented.

Unique regulatory challenges

The use of AI brings with it a number of unique regulatory challenges. The new AI framework in the EU is a major step forward, but it remains to be seen how the draft framework will develop during the EU legislative negotiations, and how the UK will respond.

The complexity of AI and how it affects consumers, particularly in financial services, certainly supports the position that AI-specific legislation is warranted in the UK.


Originally published by Thomson Reuters © Thomson Reuters.
