AI in Financial Services: Update on best practice for firms

An outline of the AI Public-Private Forum’s findings, in particular the best practice firms can adopt in relation to their AI policies and procedures.


Artificial intelligence (“AI”) has become increasingly important for UK firms as financial services providers have undergone digital transformation in recent years. Our previous article examined the divergence between the UK and EU in AI regulation. In doing so, we considered the Bank of England (“BoE”) and the Financial Conduct Authority’s (“FCA”) joint AI Public-Private Forum (“AIPPF”), which was created to facilitate discussion of the use and impact of AI in financial services in the UK.

In this update, we outline the AIPPF’s findings, in particular the best practice firms can adopt in relation to their AI policies and procedures.

What is the AIPPF?

The AIPPF was established by the BoE and the FCA and was tasked with deepening the understanding of AI and technology in financial services, including examining what AI regulation might be implemented in the future. Between 2020 and 2021, the AIPPF held numerous meetings attended by BoE and FCA representatives, as well as representatives from key organisations and banks, where the main aspects of AI in financial services were discussed and debated.

The AIPPF final report – key findings

The AIPPF’s Final Report, published on 17 February 2022 (“Report”), outlines the work and findings of the AIPPF over the previous two years. It made findings in three key areas: (1) data, (2) model risk and (3) governance, and set out clear “good practice” suggestions for firms in each of these areas, which we have summarised below. UK firms utilising any form of AI are advised to consider the good practice suggestions and whether they can be incorporated into their AI strategy.

Good practice suggestions for firms
Data
  • Align data and AI strategy: Aim to coordinate data management and strategy with AI management and strategy.
  • Tracking data flows: Have processes in place for tracking and measuring data flows within the organisation.
  • Data usage audits: Carry out regular data audits and assessments of data usage.
  • Cost-benefit analysis of data: Undertake a clear assessment of the value of the data the firm holds and uses.
  • Provenance of data: Document the provenance of data used by AI models, especially in the case of third-party data.
  • Understand limitations of alternative data: Have a clear understanding of the limitations of alternative data[1].
Model risk
  • Policy for adoption of new AI applications: Have a documented and agreed AI review and sign-off process for all new AI applications.
  • Inventory: Have a complete inventory of all AI applications in use and in development.
  • Managing bias: Have clearly documented methods and processes for identifying and managing bias in inputs and outputs.
  • Regular assessments of performance: Complete regular assessments of AI application performance.
  • Explanation of risks: Have a clear explanation of AI application risks and mitigation.
  • Policy on inputs: Have documentation and assessment of AI application inputs, including data quality and suitability.
  • Assessment of interpretability: Have an appraisal process for interpretability approaches for internal, regulatory and consumer use. This highlights the need for firms to be able to appropriately explain the inner workings of their AI models.
  • Benefit vs complexity: Ensure that the benefits obtained from an AI application are commensurate with the complexity of the system.
  • Impact on consumers: Consider and measure the impact of AI applications on consumers.
Governance
  • Collaboration across teams: Strengthen contact between data science teams and risk teams from the early stages of the AI development cycle.
  • Central AI committee: Establish a central committee to oversee firm-wide development and use of AI.
  • Training: Provide AI training to ensure a sufficient level of skill throughout the organisation.
  • Firmwide good practice: Good practice should be shared across all levels and departments within the organisation.


AIPPF comments on the new EU rules

The AIPPF’s only substantive remarks on the draft EU Regulation[2] (discussed in our previous article) concerned two aspects:

  • Definition of AI: The draft EU Regulation defines AI extremely broadly; the AIPPF noted that this captures “statistical models and techniques that are not always considered to be AI”. By contrast, the AIPPF defined AI more narrowly as the “theory and development of computer systems able to perform tasks which require human intelligence”. This suggests that the definition of “AI” in any updated UK guidance is likely to be more restrictive than the EU definition.
  • Focus on high-risk AI systems: The draft EU Regulation adopts a risk-based approach and has a narrow definition of high-risk AI systems which the AIPPF said “may be useful for other jurisdictions to consider.” While the AIPPF did not explicitly praise the EU’s risk-based approach, the Report did recommend that UK regulators provide guidance to firms which identifies high-risk systems. In that respect, it appears that the AIPPF was in agreement with this aspect of the draft EU Regulation.

What next for UK regulation?

The AIPPF did not make any recommendations in relation to new policy or regulation for AI – it stated that the regulatory response to AI is complex and there is a risk that regulation would be “too strict and too early”. Instead, the AIPPF stressed that many current general standards and regulations may be suitable, requiring only tweaks or clarification in order to apply to AI. It was implicit in the Report that there is no intention, at least in the short term, to implement a new set of AI-specific rules for UK firms. Rather, the AIPPF recommended that the FCA and the Prudential Regulation Authority (“PRA”) “provide greater clarity on existing regulation and policy”.

Until the regulators provide such further clarity, firms are advised to consider and, where possible, adopt the good practice principles which were outlined in the AIPPF Final Report. Doing so will place firms in the best position to respond to any further guidance or clarification which is expected from the regulators in the near future.

[1] This refers to data that is “unstructured” and requires analytical techniques to transform it into meaningful information. Examples include images (e.g. satellite images), biometrics and telematics.

[2] EUR-Lex – 52021PC0206

Authors

Anna Battams

Associate

anna.battams@collyerbristow.com
