Risks and responsibilities: exploring AI liabilities
May 2025 | COVER STORY | RISK MANAGEMENT
Financier Worldwide Magazine
Artificial intelligence (AI) is increasingly pervasive in business and social life. From generative assistants such as ChatGPT to customer service chatbots, we are becoming accustomed to its various applications simplifying work processes, handling mundane tasks and, increasingly, making decisions.
While AI offers tremendous potential for both businesses and individuals, its growing use also brings significant risks. Algorithmic bias, discrimination, deepfakes, privacy concerns and lack of transparency can erode trust in AI and the organisations that utilise it.
Bodies like the European Union, through initiatives such as the EU AI Act, are working to encourage the adoption of human-centric and trustworthy AI. Their goal is to ensure robust protection for health, safety, fundamental rights, democracy and the rule of law against the potential harms of AI systems, while also fostering innovation and supporting the internal market’s functionality.
While the push to make AI systems safe, transparent, traceable, non-discriminatory and environmentally friendly is highly commendable, it appears inevitable that AI-related disputes will rise globally in the coming years. Courts will face the challenge of applying traditional legal concepts to these emerging technologies.
Regulatory shifts in the EU
AI is an extremely complex issue, and AI liability even more so; there are no easy fixes. According to Katie Chandler, a partner at Taylor Wessing, the unique characteristics of AI systems pose novel liability questions, and it is unclear whether current regimes will be fit for purpose in compensating for damage suffered when AI systems fail.
“The amendments to the EU’s Product Liability Directive (PLD) seek to address some of these issues and bring AI systems into the strict product liability regime,” she explains. “This new legislation expands the scope of claims to cover AI systems and standalone software. But we are at a very early stage when it comes to considering AI and liability. No legal precedents have been set, and we need to see how the courts will interpret, for example, the new PLD and also apply existing laws such as negligence to questions of liability.
“This new legislation will make it easier for consumers to bring claims in respect of failing AI systems placed on the market in the EU, and the new presumptions of defect and causation significantly increase liability risk for AI developers and deployers,” she continues. “The opaque nature of this technology means that liability routes do not easily fit within the existing rules, and this is going to make liability difficult to assess.”
Marijn Storm, a partner at Morrison Foerster, agrees that the state of thinking around AI liability is still very much in flux – not least due to the decision to shelve the AI Liability Directive. “The EU’s proposed AI Liability Directive was intended to clarify liability around AI systems and provide for a reversed burden of proof, which entailed a rebuttable presumption of causality for anyone who suffered damages in relation to an AI system that did not comply with the requirements of the EU AI Act.
“The European Commission (EC) withdrew the proposed AI Liability Directive on 11 February 2025. The EU did, however, adopt an updated PLD in 2024. The updates broaden the definition of ‘product’ to also include software, and thus AI. The withdrawal of the AI Liability Directive signifies that the EU will integrate AI liability into its existing product liability and tort law frameworks, rather than creating a separate and specific regime for AI liability,” he adds.
For Dan Lucey, of counsel at McCann FitzGerald, the current environment creates a potent cocktail of risk, as there is commercial pressure to deploy AI even as the technology itself and surrounding legal regulations are nascent. “Furthermore, divergent policy approaches globally to the regulation of AI impact not just the compliance burden for organisations but the very nature and specifications of the technology itself; an AI system may be prohibited in one jurisdiction and legal elsewhere,” he points out.
Additionally, AI liability risk will be different for every organisation and industry. As AI systems and agents become more embedded in companies’ workflows, products and services, it is critical to analyse how this use of AI alters a company’s legal risk and where in the AI supply chain liability will fall if something goes wrong.
The ‘black box’ problem
One of the biggest challenges with AI, from a liability perspective, is the ‘black box’ nature of these systems. Opacity raises significant evidential issues when seeking to determine the cause of a malfunction or which party is responsible for damage caused.
“Not being able to see how the AI system has come to its decision, or how it has continuously learnt or been trained, and whether this can be traced back to the manufacturer or developer (and therefore a design defect) or to the deployer or end-user, means the root cause of the alleged damage, and who is responsible for it, will be very difficult to determine,” notes Ms Chandler. “This is one issue the new PLD is seeking to address, such that it would not be a bar to a consumer’s claim if they cannot read what is inside the black box.
“The presumptions of causation are designed to resolve the black box problem, making it easier for consumers to bring claims when the technical or scientific evidence is excessively difficult, or the AI system itself is too technically complex,” she continues. “If they can demonstrate the product contributed to the damage and it is likely it was defective or the defect is likely a cause of the damage, there will be a rebuttable presumption applied by the courts which the defendants will need to disprove.”
Additionally, the current generation of advanced AI models includes neural networks and large language models (LLMs), both of which are difficult to interpret. “Unlike with traditional software, errors often cannot be traced back to specific lines of code, as the AI models generate outputs based on complex probabilistic relationships, rather than deterministic rules,” says Mr Storm. “This makes the burden of proof the key topic in litigation.
“In many cases, neither the claimant nor the defendant will be able to evidence or disprove a claim. As such, the party that has the burden of proof is the party that will likely lose the case. This means that the withdrawal of the AI Liability Directive will have very real consequences, as the proposed reversal of the burden of proof is off the table,” he adds.
In the event of an AI malfunction, questions will inevitably arise about whether liability should fall on the developer or the end-user, or whether there is a case for holding the AI system itself accountable. According to Mr Lucey, AI systems are not yet at the level of autonomy to justify them having separate legal personality – and whether society should ever allow AI systems to reach that level is an important policy question.
“For now at least, we do not need to create a new legal fiction to hold accountable an AI system which remains in the realm of science fiction,” suggests Mr Lucey. “AI systems are created and used by organisations or individuals who must be conscious of their legal responsibilities. AI systems are tools and, like all tools, they can cause harm either because of how they were created or how they were used.
“In cases where individuals rather than organisations are harmed, the public policy underlying traditional strict product liability laws focuses on an economic argument that manufacturers, as those who benefit from the sale of products, are better placed to insure against and price-in liability costs, compared to consumers who might otherwise have no remedy. Similar logic may apply in the area of AI liability,” he adds.
Strict liability
From a legislative standpoint, there have been several significant developments in recent years. In the EU, the AI Act and the new PLD are among the most notable, often described as two sides of the same coin. “The regulatory and liability frameworks are closely connected, and it seems clear that any non-compliance with mandatory requirements under the AI Act will lead to increased strict liability risks under the new PLD,” explains Ms Chandler. “This is particularly the case given the expanded definition of ‘defect’, which takes into account mandatory safety standards, as well as the easing of the burden of proof on claimants where there is non-compliance with relevant EU product safety regulations.”
Although the liability impact of the EU AI Act is not as comprehensive without the shelved AI Liability Directive, general tort laws will continue to apply. “In EU member states, this generally entails that anyone who causes damages by violating a legal obligation is held to compensate the damages suffered,” says Mr Storm. “As such, even without the AI Liability Directive, there is a legal basis for compensating damages caused by AI-related incidents.”
As Mr Lucey notes, the policy framework in the EU essentially obliges operators to reduce the potential for risk from AI. “When an issue does arise, there are consequences for the operators, either through regulatory enforcement and fines or civil liability at the suit of the parties harmed,” he says. “That blend of public and private enforcement creates increased risk for organisations.
“Recent public comments by Henna Virkkunen, the European commissioner responsible for digital and frontier technologies, suggest that the enforcement of the AI Act will be ‘business-friendly’ – but what that means in practice remains to be seen. Notably, the recent policy direction in the EU on civil liability, like with the revised PLD, has explicitly been to make it easier for individuals to win when they sue organisations.”
Looking ahead, strict liability rules for AI developers are likely to influence innovation and the overall advancement of AI technologies. As Mr Storm notes, start-ups may be held accountable for damages caused by their AI systems, even if they adhere to best practices. However, a clear division of responsibility between AI system providers and deployers can actually foster innovation. Ultimately, developers, manufacturers and users will need to collaborate to mitigate liability risks and ensure the safe integration of AI systems.
Approaches to mitigating AI risks
As the AI landscape rapidly advances, companies must prepare for potential risks associated with AI failures. To manage and mitigate liability, they can take proactive steps to address pertinent issues.
Douglas McMahon, a partner at McCann FitzGerald, outlines three key ways companies can manage liability risks associated with AI failures.
First is through contractual protections. When purchasing AI systems from third parties, companies can negotiate contractual promises regarding the system’s functionality and seek damages if these promises are broken. The effectiveness of this approach depends on the ability to negotiate protections, the supplier’s capacity to meet claims, and the risks of litigation.
Second is by managing liability to their own customers. In business-to-business contexts, risk allocation is flexible, but pushing the risk of AI failure onto customers depends on the specific context. Consumer protection laws may limit risk allocation to consumers.
And third is by implementing internal systems to reduce AI failure risks and quickly identify issues. This approach, though challenging, is preferable to relying solely on contractual protections and helps mitigate risks more effectively.
“Companies need to conduct thorough risk assessments covering data privacy concerns, cyber security protections and vulnerabilities, algorithm bias, and regulatory compliance,” suggests Ms Chandler. “Identifying any high-risk systems under the AI Act and formulating an AI compliance plan to ensure regulatory requirements are met is crucial. Companies should also consider the consequences of potential misuse and whether there are any practical steps which can be taken to guard against this – for example by providing user manuals clearly explaining how the AI technology works, how decisions are made and the dangers of misuse.
“Carrying out robust quality checks and audits, including in respect of data quality, to ensure AI systems are accurate and reliable is also recommended,” she continues. “Implementing security measures to ensure AI systems are protected from unauthorised access, misuse and cyber security risks is essential. Additionally, obtaining insurance coverage can help mitigate potential financial liabilities where AI-related risks materialise. Reviewing supplier contracts and updating terms such as exclusions, warranties and disclosure processes is also advised.”
Organisations should also implement and document their risk management measures throughout the AI lifecycle. “AI systems will not become less complex, and thus identifying a specific fault will remain troublesome,” says Mr Storm. “In such complex ecosystems, the defence lies in the design of the technology and being able to evidence that all reasonable risk management measures have been implemented in this context.”
Anticipating legal frameworks for AI liability
For now, case law framing how liability will be allocated when AI systems fail is limited. “It remains to be seen how quickly such decisions will emerge once the new PLD applies from December 2026 and if these significantly impact traditional liability concepts and laws,” says Ms Chandler.
Mr Storm expects new legal doctrines, and perhaps new legal frameworks, to address liability specifically in circumstances where AI and humans interact, such as self-driving vehicles. “The implementation of such technologies may require imposing a duty of care on the deployers of such AI tools, to prevent damages that cannot be compensated due to opaque responsibilities,” he explains.
The AI landscape will keep evolving in the coming years, and as regulators strive to keep up, companies must take proactive measures to protect themselves. Navigating AI liability will remain challenging, especially as policymakers update product liability laws.
© Financier Worldwide
BY Richard Summerfield