Emerging technologies and privacy

March 2025  |  SPECIAL REPORT: DATA PRIVACY & CYBER SECURITY

Financier Worldwide Magazine, March 2025 issue


Artificial intelligence (AI) platforms, such as ChatGPT, Microsoft Copilot and Google Gemini, along with other emerging technologies including quantum computing, blockchain and 5G, are becoming increasingly integral to business operations and daily life, fundamentally transforming how we work and communicate.

This article examines the potential implications of these emerging technologies, with particular focus on their impact on privacy rights within the UK.

Understanding emerging technologies

Emerging technologies are innovations whose development or practical applications are not yet fully realised. They are generally new, but the term also covers technologies currently in development or expected to become available within the next five to 10 years, which often represent significant advances on, or updates to, existing systems.

The use of emerging technologies creates groundbreaking opportunities to help businesses grow by cutting inefficiencies, automating tasks and minimising human error. Using AI to assist with repetitive functions such as compliance, fraud detection and data analysis frees businesses to focus their efforts elsewhere, a benefit felt particularly keenly within the financial sector.

Existing applications within the financial sector include: (i) JPMorgan Chase’s COiN contract intelligence tool, which uses AI and machine learning to review legal documents and extract data, delivering a significant saving in document review time; (ii) Mastercard’s use of AI to detect fraudulent transactions, aiming to reduce potential losses; (iii) the use of blockchain-based payment systems by banks to facilitate cost-effective settlement of cross-border transactions; and (iv) the use of robotic process automation by the Bank of England to streamline its compliance processes, reduce errors and save costs.

However, while the uses and benefits of emerging technologies can be attractive, integrating them remains complex and expensive, requiring dedicated teams and diverse stakeholder input. Large companies can invest heavily in AI-powered tools, but such investments may be prohibitive for smaller businesses.

Impact of emerging technologies on privacy

The use of personal data represents a significant risk in emerging technologies, particularly in generative AI (GenAI) systems. Organisations must implement robust measures to prevent data protection breaches and maintain strict oversight of data collection, processing and storage practices; this requirement is backed by legislation.

The Data Protection Act (DPA) 2018 and the UK General Data Protection Regulation (GDPR) establish comprehensive requirements for any organisation using personal data in the UK. The Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) provide additional oversight of technology implementation in the financial sector, enforcing standards for transparency, fairness and consumer protection.

Additionally, the European Union’s (EU’s) AI Act, which came into force in August 2024, introduces mandatory requirements and guidelines for high-risk AI applications, including requirements for risk management systems, data governance, human oversight and transparency.

These requirements must be considered by UK businesses operating in or selling into the EU, even if they are not established there. The EU AI Act requires that AI systems be developed and used in compliance with existing privacy and data protection rules, processing data that meets high standards of quality and integrity. The main takeaway is that data protection is a priority, and businesses must hold themselves to these standards.

In practice, possession of personal data can give businesses a competitive edge, while handing over personal data can cause consumers unease, so its protection is fundamental.

However, the following risks must be considered by any business seeking to make use of emerging technologies within the financial sector.

First, emerging technologies can collect large amounts of data, such as transactional and operational logs. Where devices lack robust encryption and are vulnerable, using external technology providers to input sensitive information can breach client confidentiality and legal privilege. Businesses must undertake data protection impact assessments to ensure transparency in how this data is processed and stored.

Second, AI tools can reflect biases that may produce unfair lending practices or discriminatory risk assessments, and a lack of transparency in documenting data sources and processes can hinder audits of these systems for fairness and compliance with the UK GDPR. Data provenance therefore needs rigorous scrutiny to ensure data is diverse and representative; because disclosure of data lineage is not currently mandated in legal tech applications, this remains a risk.

Third, platforms that collect and process large amounts of financial data, including customer inquiries and case histories, without proper privacy policies and clear data-sharing practices risk data being misused or accidentally shared with third parties, which could undermine public trust. Compliance with the UK GDPR principles of data minimisation and purpose limitation is key.

Lastly, organisations within the financial sector may find that their professional indemnity insurance policies do not explicitly cover AI-related risks, including technological failures or cyber threats. While emerging technologies can enhance efficiency, they still require human oversight to ensure that outputs are correct, particularly in the context of regulated activities such as financial advice. If an AI system overlooks key clauses, provides incorrect advice or ‘hallucinates’, and is relied upon, there is a gap that firms must seek to address through their insurance policies.

One of the biggest risks to businesses deploying GenAI is the underlying use of large language models (LLMs), on which GenAI tools are built. LLMs can retain and learn from the information inputted into them, and it is often unknown where and how that data is stored and how it is manipulated in the background, making compliance with the DPA 2018 and the UK GDPR challenging.

The key risks of LLMs are that confidential information inputted into the model could resurface in responses to other users, potentially causing data protection breaches, and that proprietary or sensitive information could become part of the model’s knowledge base, or commercially sensitive information could be inadvertently shared, resulting in a breach of competition law. Businesses must therefore implement strict data governance policies and technical controls to mitigate these risks.
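By way of illustration, the sketch below shows one such technical control: screening free text for personal data before it is submitted to an external GenAI tool. It is a minimal sketch only; the patterns and the redact_before_submission helper are hypothetical, and a production system would use a dedicated PII-detection library tuned to the organisation’s own data.

```python
import re

# Illustrative patterns only; real deployments would use dedicated
# PII-detection tooling with patterns tuned to the data they handle.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_before_submission(text: str) -> str:
    """Replace recognised personal data with placeholders before the
    text leaves the organisation's control, supporting the UK GDPR
    principle of data minimisation."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# The redacted prompt, not the original, is what reaches the GenAI tool.
prompt = "Client john.smith@example.com called on 020 7946 0123 about a refund."
print(redact_before_submission(prompt))
```

A filter of this kind cannot catch everything, which is why it complements, rather than replaces, the governance policies and staff training discussed elsewhere in this article.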

Some GenAI solutions have attempted to address the privacy issue by giving users the option to opt out of model training; however, this option is not always prominent, and any data inputted into the tool may still be stored. Despite features such as user de-identification and other data protection measures, the security of sensitive information in these GenAI systems remains questionable.

In addition to the ability to opt out of model training, many suppliers now allow AI tools to be deployed on private clouds, so that the data sits entirely within the organisation’s network and is never exposed to public cloud services. While the data may still be difficult to remove, this reduces the risk of data-handling breaches because the data remains isolated within the organisation’s own environment from the outset.
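As a sketch of this pattern, the snippet below points a standard client library at an internal endpoint so that prompts and responses never leave the corporate network. The endpoint, token and model names are hypothetical; it assumes an OpenAI-compatible inference server self-hosted on the organisation’s private cloud.

```python
from openai import OpenAI

# Hypothetical internal endpoint: an OpenAI-compatible inference server
# hosted on the organisation's private cloud, so prompts and responses
# never transit a public AI service.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # resolves only inside the corporate network
    api_key="internal-gateway-token",  # issued by the organisation, not a public provider
)

response = client.chat.completions.create(
    model="in-house-model",  # a model deployed and governed internally
    messages=[{"role": "user", "content": "Summarise the attached client file."}],
)
print(response.choices[0].message.content)
```

The design choice here is architectural rather than contractual: instead of relying on a supplier’s promise not to train on inputs, the data simply never reaches the supplier’s infrastructure.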

What does the future hold?

As technology outpaces existing regulations, privacy laws are expected to evolve to tackle these new challenges. While UK legislation has so far adequately protected UK privacy rights, the UK government recognises in its responsible AI report, ‘Assuring a responsible future for AI’, that AI poses a privacy risk.

Following this, regulators are expected to introduce new measures, such as stricter controls on data processing in AI systems and enhanced frameworks for automated decision-making. The EU AI Act pushes for stronger safeguards against algorithmic bias, data exploitation and surveillance practices. The UK is also likely to see a shift toward sector-specific privacy regulation.

While the UK has yet to finalise its AI regulation strategy, the government has adopted a sector-specific regulatory approach aimed at encouraging innovation. As UK AI regulations are formulated, it is essential that businesses continue to implement privacy-by-design principles and input only essential data into AI systems.

Preventative rather than remedial measures are urged to avoid breaches of legislation. Businesses that use AI should therefore implement only software that ensures privacy compliance from the outset to protect client data. Thorough supplier due diligence should be undertaken before adopting any emerging technology, and employees should receive full training on the risks of inputting client data into such platforms.

Conclusion

The UK’s privacy landscape will likely be shaped by ongoing developments in technology, with privacy laws expected to be updated to address emerging risks such as data exploitation and algorithmic bias.

Innovations such as blockchain and AI privacy tools show promise for enhancing data security, but dialogue between technology developers, regulators and privacy advocates is essential to strike a balance between enabling innovation, protecting individual privacy rights and ensuring a creative digital future.

Storing data on a blockchain offers a decentralised, and potentially powerful, approach. The UK government’s recent response to the ‘AI Opportunities Action Plan’ demonstrates its commitment to AI, with investment in a new supercomputing facility to increase the capacity of the national AI Research Resource.

Expanding the country’s AI infrastructure is an attempt to make the UK a first choice for AI firms, provided that effective regulation is developed to keep pace with the rate of change.

 

Victoria Robertson is a partner, Matt Whelan is a senior associate and Chris Doherty is an associate at Trowers & Hamlins LLP. Ms Robertson can be contacted on +44 (0)161 838 2027 or by email: vrobertson@trowers.com. Mr Whelan can be contacted on +44 (0)121 203 5651 or by email: mwhelan@trowers.com. Mr Doherty can be contacted on +44 (0)161 838 2126 or by email: cdoherty@trowers.com.

© Financier Worldwide

