While AI technology has the potential to deliver very significant benefits, it is clear that there are equally important risks that need to be recognised and mitigated.
Published 11 September 2023
Introduction
The use of Artificial Intelligence (AI) technology in almost every area of our lives has grown exponentially in recent times. While such technology has the potential to deliver very significant benefits, it is also increasingly clear that there are equally important risks that need to be recognised and (so far as possible) mitigated. The field is too important to leave risk mitigation entirely in the hands of AI developers; a degree of central regulation is widely accepted as necessary. The UK government has recognised the tension between the desire of the business community to engage with speedy AI innovation and the regulatory burdens which are commonly perceived to impede progress.
On 29 March 2023, the Secretary of State for Science, Innovation and Technology published a White Paper entitled “A Pro-Innovation Approach to AI Regulation” (the “White Paper”(1)) which sets out the government’s proposed framework (the “Framework”) for providing essential guidance for AI regulation without obstructing development.
Many organisations and institutions have responded to the White Paper, including the Equality and Human Rights Commission, the British Computer Society, the Law Society, and the Association of Chartered Certified Accountants (the “ACCA”) (in conjunction with Ernst and Young Global Limited). These responses have now been published.
This article examines the current status of AI regulation in the UK and the White Paper’s key proposals, with particular reference to the observations of the Law Society and the ACCA.
The need for AI regulation
AI has the potential to deliver transformative advances in the medical, technological, and scientific spheres, as well as in everyday life.
However, the growth of AI has intensified widespread risks, ranging from social manipulation and disinformation (such as ‘deep fakes’) to privacy breaches. In the legal and accountancy sectors, concerns over liability and transparency have grown alongside the soaring popularity of programs such as ChatGPT.
AI risk management has long polarised public and professional opinion. In early 2023, Geoffrey Hinton, the ‘godfather of AI’, triggered public anxiety by expressing concerns that AI could become uncontrollable(2). In contrast, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence and Professor of Computer Science at the University of Washington, argued that “doom-and-gloom predictions often fail to consider the potential benefits of AI”(3).
Presently, there is no legislation in England and Wales that specifically regulates AI. Instead, developers are governed mostly by non-statutory government guidance and by existing regulators. In contrast, the EU intends to pass the ‘Artificial Intelligence Act’ (the “Act”), the proposed provisions of which were set out in a proposal published by the European Commission on 21 April 2021(4) (the “EU Proposal”).
The EU Proposal sets out the four objectives of the Act:
– to set requirements specific to AI systems, ensuring that they are safe and respect existing law on fundamental rights;
– to provide legal certainty through clear and consistent requirements for the use of AI systems;
– to enhance governance and the effective enforcement of existing law on fundamental rights and safety; and
– to facilitate the development of a single market for lawful, safe, and trustworthy AI.
The EU Proposal confirms that the Act will take a risk-based approach, prohibiting AI systems whose use is considered “unacceptable”, such as those with the potential to manipulate or exploit vulnerable groups.
In June 2023, EU lawmakers agreed to amend the draft legislation to ban biometric identification surveillance systems. Further, generative AI programs such as ChatGPT, and ‘deep fake’ applications, would be legally obliged to disclose that their content is AI-generated(5)(6).
There have been calls for the government to introduce similar safeguards before AI develops too far. In March 2023, more than 1,000 AI experts signed an open letter calling on AI developers to, inter alia, work with policymakers to implement “new and capable regulatory authorities dedicated to AI”(7).
Striking the right balance between regulation that addresses and minimises risk and regulation that allows AI to thrive is thus crucial.
Summary of the White Paper
As its title suggests, the White Paper reflects the government’s ambitions to be a global leader in AI innovation, with relatively light-touch regulatory control. In particular, the government is not proposing to establish a new AI regulatory body, but instead to expand the remit of existing regulators to cover AI development as well.
Developer responsibility and risk management run as a thread through the publication, which highlights the crucial relationship between public trust in AI and adequate regulation.
The Framework is built on five principles (the “Principles”):
– Principle 1: safety, security, and robustness;
– Principle 2: appropriate transparency and explainability;
– Principle 3: fairness;
– Principle 4: accountability and governance; and
– Principle 5: contestability and redress.
It is intended that the Principles will complement each other to increase public trust and transparency whilst encouraging AI development within a safe and accountable structure. The Principles exist so that a consistent cross-sector approach can be adopted by different regulators.
The White Paper confirms that the government will not legislate on how businesses use AI, on the basis that statutory restrictions may hinder and delay AI innovation. Instead, the Framework and its Principles will be implemented and led by existing regulators, who may in future be placed under a statutory duty to do so.
Whilst there is an attempt to reassure regulators that there will be collaboration with, and support from, government departments, the government has seemingly left regulators with little guidance about how this will work in practice. The White Paper recommends that regulators issue their own guidance to businesses and execute their own measures and tools to implement the Principles. There is clear potential for different regulators to take different approaches, which is a concern.
The Law Society’s response
The Law Society is the independent professional body for solicitors in England and Wales. Its response to the White Paper was published on 27 June 2023(8).
The response welcomes the possible increase in access to justice that AI may bring, whilst raising the Law Society’s concerns over the White Paper’s limitations.
Crucially, the Law Society has raised accountability and liability concerns in respect of high-risk or dangerous AI functions, and has asked for explicit regulations which clearly set out how liability for AI outcomes is to be assigned. It does not shy away from criticising the government’s ‘soft law’ approach, calling instead for stringent legislation focused on high-risk contexts. Indeed, the Law Society points to the hard-law approaches of the EU and the US and notes that international alignment is required where law firms operate in multiple jurisdictions. In particular, legal definitions of the terminology used in the White Paper (such as ‘transparency’) are needed so that businesses and legal professionals can understand what their responsibilities are; the EU Proposal, by contrast, intends to introduce statutory definitions, complemented by a list of specific techniques and approaches, to ensure legal certainty.
Further, the Law Society warns that without legislation there will be inadequate redress for AI-related harms; there consequently needs to be statutory clarification of the mechanisms available to challenge the actions of AI systems and of the remedies available. As to the possible statutory duty on regulators, the Law Society maintains that any legislation should not be overly restrictive, given that the rapid advancement of AI requires flexibility. Nonetheless, clarity should be at the forefront of any legislation.
Most notably, the Law Society has recommended that policymakers introduce an expert ‘AI Officer’ who would possess the necessary skillset and resources to scrutinise and, if necessary, override an AI output. The AI Officer could advise law firms and solicitors on the deployment of AI systems and the relevant regulatory requirements. While this is an interesting proposal, it would need to be established where in government such an individual would sit, together with an adequate budget.
In conjunction with this, law firms will need to deliver AI training to ‘upskill’ their employees so that they can benefit from advanced technology whilst remaining aware of the associated risks and responsibilities. Regulators should also receive in-depth AI training so that they can understand how AI works and how it can be regulated.
The Law Society further criticises the White Paper by pointing to the inconsistency between the (relatively) robust clarity of the GDPR and the vagueness and ambiguity of the White Paper. For example, under the GDPR individuals have the right to request human intervention to challenge automated decisions. The White Paper, by contrast, merely states that “regulators will need to consider the suitability of requiring AI system operators to provide an appropriate justification” for a decision (our emphasis). The Law Society is quick to point out that such wide discretion may cause cross-sector uncertainty.
The ACCA’s response
The ACCA is the global body for professional accountants. Its response to the White Paper was published on 27 July 2023(9).
The ACCA’s response supports the White Paper and the Principles, which are deemed “well suited” to a cross-sector approach. However, it raises the potential practical difficulties in adopting the Principles when different sectors have different regulatory environments: the financial services and legal sectors, for example, are subject to stringent, visibly enforced regulatory requirements, whilst other sectors, such as recruitment, are not widely regulated. To address this, the ACCA calls for prescriptive sector-specific guidance, with further comment on the support to be provided to sectors navigating existing legislation which indirectly impacts the use of AI, such as the Equality Act 2010, the Consumer Protection from Unfair Trading Regulations 2008, and the Modern Slavery Act 2015.
The ACCA echoes the Law Society’s concerns over the government’s reluctance to incorporate the Principles in legislation and calls for refinement of the approach to ensure SMEs are not inadvertently disadvantaged. Unlike large firms, which may even have in-house regulatory advisers, SMEs lack the resources and expertise to commit to AI systems without certainty over their regulation. The ACCA highlights that SMEs drive innovation and have the potential to bring about transformative industry-wide change; it is therefore important that any regulation does not entrench monopolies or stifle innovation originating from SMEs.
Finally, since many of the ACCA’s stakeholders operate internationally, there is a concern that a lack of statutory clarity may limit harmonisation across jurisdictions; businesses may opt to comply with more prescriptive international legislation to limit the risk of non-compliance. If the UK wishes to remain a global leader in AI, the ACCA recommends that it align itself with international jurisdictions and provide greater regulatory certainty to support cross-border operations; otherwise, it risks businesses opting for an alternative regime to limit exposure, save costs, and simplify internal procedures. In the absence of a centralised regulatory entity, the ACCA also recommends that the government provide sufficient training to board members in data governance, AI systems, and ethics, so that companies can undertake their own AI risk assessments.
Concluding thoughts
The rapid growth of AI, and the uncertainty accompanying it, is something that both the government and regulators will need to keep under keen and continuing review. As both the technology and its range of applications change, regulators will need to remain nimble to ensure that regulation remains appropriate to the task.
Training of staff involved in development, regulation and implementation of AI systems appears to be crucial. There are obvious cost and time implications for such measures, but without them, there are increased risks of harm being caused by ‘unintended consequences’ of AI technology implementation.
There is an obvious lack of regulatory certainty; however, this must be balanced against an essential degree of flexibility. Professionals should not be ‘left to guess’ what their responsibilities are and what is expected of them; without clearly defined regulations, firms and individuals may be hesitant to engage with AI for fear of liability.
The lack of clarity in the White Paper may lead different (or even the same) regulators to adopt and interpret different definitions, causing inconsistent practice. The unpredictability of AI also calls for both domestic and international harmonisation, and the government will need to provide regulatory bodies with adequate resources to discharge their new duties.
It remains to be seen whether the proposed regulatory framework is sufficiently robust to attract investors and innovators to the UK and drive the responsible innovation the UK government aims to promote.
(1) A pro-innovation approach to AI regulation – GOV.UK (www.gov.uk)
(2) ‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation | The Guardian
(3) No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity | MIT Technology Review
(4) European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’, Brussels, 21.4.2021, COM(2021) 206 final, 2021/0106 (COD), {SEC(2021) 167 final} – {SWD(2021) 84 final} – {SWD(2021) 85 final}
(5) EU AI Act: first regulation on artificial intelligence | News | European Parliament (europa.eu)
(6) EU lawmakers vote for tougher AI rules as draft moves to final stage | Reuters
(7) Pause Giant AI Experiments: An Open Letter – Future of Life Institute
(8) A pro-innovation approach to AI regulation – Law Society response | The Law Society; Law Society response to UK Government white paper: A pro-innovation approach to AI regulation June 2023 (pdf)