Shorter Reads

How has AI impacted litigation?

AI has been implemented in numerous ways across various sectors and is increasingly becoming a feature of both our everyday and working lives. This article provides a snapshot of how AI has impacted litigation in particular, both here in the UK and further afield.

4 minute read

Published 18 August 2025

AI as a tool

Thomson Reuters conducted a survey of thousands of professionals across the legal, tax, trade, accounting, and risk and compliance fields, and found that 63% of respondents had used AI in their work, with 12% using it regularly. The survey was published in July 2024, and those figures are anticipated to have only increased since.

Taking the financial services sector as an example, we've seen AI used in customer service systems such as chatbots, in calculating insurance pricing and making credit underwriting decisions, in investment management through real-time trading decisions, and in processing large quantities of data for the purposes of regulatory compliance, to name but a few. The implementation of such technology has been of great benefit but has also been a catalyst for claims and disputes, examples of which are set out below.

When lawyers are appointed to deal with such disputes, they too are increasingly utilising AI in their work. A key and longer-standing example is the use of AI to facilitate disclosure review, from the more basic functions of de-duplication to more recent developments such as sentiment analysis. Research is another area, though it comes with a host of caveats and warnings, primarily concerning (i) data protection and (ii) hallucinations. We all need to be mindful of entering private or sensitive data into publicly available AI systems such as ChatGPT, as that data may be accessed by OpenAI staff or subcontractors to train the model, potentially in breach of your duty to keep such information confidential. We also need to remember that generative AI can hallucinate, essentially providing false information in response to your queries. This has caught out several lawyers who submitted fake case citations to the courts, with examples published from New York, Canada and, more recently, here in London.

UK Litigation

AI litigation is a developing area: the first 'wave' focused on breaches of IP rights, and claims are now moving into breach of contract and misrepresentation, to name a few. Below we take a look at two key cases brought here in England.

Tyndaris SAM v MMWWVWM Ltd [2020] EWHC 778 (Comm)

Tyndaris is a Monaco-based investment manager which signed a contract with VWM, pursuant to which Tyndaris agreed to manage an investment account for VWM using an AI-powered system that made the investment decisions (the "AI System"). VWM had sought an investment fund that would trade without any human intervention, so as to remove bias and emotion.

Tyndaris had claimed its AI System was capable of applying machine learning to real-time news, social media data and other sources – essentially representing that it could remain up to date so as to predict sentiment in the financial markets. Tyndaris also confirmed that its AI System had been subject to sufficient testing.

Live trading started in late 2017 and VWM quickly suffered losses of c.22m USD. It wrote to Tyndaris demanding that trading be suspended immediately.

Perhaps surprisingly, it was Tyndaris who then commenced the litigation, in the form of a breach of contract claim brought in the English High Court seeking c.3m USD in unpaid management fees from VWM. Less surprisingly, VWM counterclaimed, seeking to recover its losses on the basis that Tyndaris had made misrepresentations regarding the capabilities of its AI System.

The court was to determine various issues, including:

  • How did the AI System operate?
  • What did Tyndaris say about how the AI System would operate?
  • What testing did Tyndaris conduct on the AI System before it was advertised?
  • What level of human intervention was appropriate in the circumstances?

Unfortunately for us, but not necessarily the parties, this matter settled in May 2020 so there has not yet been a UK court judgment on those issues. But it certainly demonstrates the type of claims and issues we expect to see more of, and highlights the type of questions you should be asking yourself if you’re offering or purchasing services that utilise or rely on AI.

Getty Images (US) Inc v Stability AI Ltd [2025] EWHC 38 (Ch)

Getty is a visual media company which licenses stock photography images, and Stability is a developer of generative AI systems. In short, the case concerns claims by Getty that Stability infringed its IP rights by using substantial amounts of Getty's content without permission to train and develop Stability's AI models.

The three-week trial took place in June 2025, and we are awaiting judgment. There were over 60 issues for the High Court to address, the headlines being (1) trade mark infringement and passing off, (2) copyright infringement, and (3) database right infringement.

Of particular note for our purposes was the fact that Getty's claim included a representative action on behalf of around 50,000 content creators and photographers whose work was licensed exclusively to Getty. With AI being increasingly incorporated into service industries, it has the potential to impact a substantial number of customers at any given time, which we anticipate will result in more group action claims going forward.

At an interim application hearing, the court concluded that Getty's represented group was not sufficiently clearly or precisely defined, so the representative action could not proceed. Future group actions of the kind we anticipate will therefore need to be carefully framed, as such claims are not straightforward to bring.

Wider Trends

In Canada, an AI-related dispute has already reached trial. It was a smaller matter before the Civil Resolution Tribunal, but it is nonetheless a clear example of the claims that may be brought when AI chatbots, now commonly used across many industries, produce incorrect information.

Moffatt v Air Canada [2024] BCCRT 149

Mr Moffatt used the chatbot on Air Canada's website to enquire about the airline's bereavement fares policy for passengers travelling immediately after the death of a close family member. The chatbot told him he could apply for a reduced bereavement fare retrospectively by completing the relevant form within 90 days of his ticket being issued. It also provided a link to the actual bereavement policy, which stated that such a discount could only be requested before travelling. Mr Moffatt didn't look at the link and instead relied on the chatbot's incorrect advice. He travelled, tried to apply retrospectively for the reduced fare, and was refused.

The Tribunal found that Air Canada had negligently misrepresented the bereavement policy and ordered it to pay Mr Moffatt damages equal to the difference between the fare he paid and what he would have paid with the bereavement discount.

One of Air Canada's arguments was that the chatbot was a separate agent, servant or representative, responsible for its own actions. The Tribunal found that submission remarkable and concluded that the chatbot was simply a component of Air Canada's website, for which Air Canada was responsible.

Key Takeaways

AI litigation is still at a relatively early stage, and given the rate at which AI is developing, there will likely be a range of claims that we haven't even thought of yet. But currently the most prevalent causes of action, and the areas in which we anticipate continued growth, are:

  • IP disputes;
  • Breach of contract claims; and
  • Misrepresentation claims.

Businesses need to be mindful of these matters and treat such cases as a warning. If you are offering AI services, for example, you need to be thorough in the ongoing testing of the software and ensure that any representations you make regarding your AI systems are accurate and that those systems are operating as expected.

If you have any questions – on pre-emptive steps you can take to reduce your litigation risk, concerning a current dispute, or otherwise – please do not hesitate to contact our team here at Collyer Bristow.
