GDPR: time to explain your AI

August 2017  |  EXPERT BRIEFING  |  DATA PRIVACY

One of the more promising applications of artificial intelligence (AI) is in regulatory compliance, an area that continues to get more complicated for financial institutions, many of which are struggling to implement IFRS 9, GDPR, PSD2 and other requirements simultaneously.

It is certainly true that AI can play a role in ensuring regulatory compliance. But regulations such as the General Data Protection Regulation (GDPR), which is designed to strengthen and unify data protection for every individual across the European Union, place certain transparency requirements on organisations that cannot be ignored. And transparency has never been AI’s strong suit.

Article 22 of GDPR, for example, concerns the use of data in decision-making that affects individuals, such as a person applying for a loan. Specifically, point one of the article states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.

Point two of Article 22 describes exceptions (including situations involving the person’s explicit consent, such as applying for a loan), but the key issue is point three, which states: “the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”.

In risk applications governed by Article 22 of GDPR, customers need to be given clear-cut reasons for why a decision adversely impacted them. Where the decision is driven by a model, the model needs to point clearly to the drivers of the negative score. Since most credit decision models are scorecard-based, the answer to the question of why a loan was not approved should be clear-cut.
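
To illustrate why, consider a minimal, purely hypothetical points-based scorecard (the attributes, bins and point values below are invented for illustration). Because the score is an additive sum of points, the attributes that cost an applicant the most points relative to the best achievable bin fall straight out as adverse-action reasons.

```
# Illustrative only: a toy points-based scorecard with made-up attributes, bins
# and point values, showing why scorecard decisions yield clear-cut reasons.

SCORECARD = {
    "utilisation": {"<30%": 55, "30-70%": 35, ">70%": 10},
    "months_on_book": {"<12": 15, "12-48": 30, ">48": 45},
    "recent_delinquencies": {"0": 50, "1": 25, "2+": 5},
}

def score(applicant):
    """Return the total score and the attributes that cost the most points."""
    total, losses = 0, []
    for attribute, bins in SCORECARD.items():
        points = bins[applicant[attribute]]
        total += points
        # Points lost relative to the best possible bin for this attribute.
        losses.append((max(bins.values()) - points, attribute))
    reasons = [attr for loss, attr in sorted(losses, reverse=True) if loss > 0]
    return total, reasons[:2]  # the top two adverse-action reasons

applicant = {"utilisation": ">70%", "months_on_book": "12-48", "recent_delinquencies": "1"}
print(score(applicant))  # (65, ['utilisation', 'recent_delinquencies'])
```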

What happens when your model was built with AI?

AI is a useful tool for enhancing credit risk scorecards, but it also has a reputation as a ‘black box’ technology; it struggles when it comes to explaining its decisions.

This is part of a wider problem associated with discrimination in the ‘digital single market’, a planned sector of the European single market that covers digital marketing, ecommerce and telecommunications. The propensity to use AI in decision-making in this area is much greater, and so is the risk that individuals will be discriminated against based on factors such as geographic location.

For example, consider how offers for mobile phone service plans are calculated, and to whom they are offered. If a consumer feels they have been adversely affected by an AI-driven decision model and queries the decision-making process, it would be untenable for the operating company simply to blame the machine that made the decision.

Breaking through the black box

The solution to the challenge of ‘breaking AI out of the black box’ is explainable AI. According to the Defense Advanced Research Projects Agency (DARPA), explainable AI aims to produce more explainable models while maintaining a high level of learning performance (prediction accuracy), and to enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.

There are several ways AI can be explained when it is used in a risk or regulatory context. One example is to use scoring algorithms that inject noise and score additional data points around the actual data record being evaluated, in order to observe which features are driving the score. This technique, called local interpretable model-agnostic explanations (LIME), involves manipulating the data variables in small ways to see what moves the score the most.
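
As a rough sketch of this idea (not the LIME library itself, and assuming a generic numeric black-box scoring function), the record can be perturbed with small amounts of noise, the perturbations scored by the black-box model, and a locally weighted linear surrogate fitted so that its coefficients indicate which variables move the score the most.

```
# A minimal sketch of the LIME idea: perturb a record, score the perturbations
# with the black-box model, and fit a locally weighted linear surrogate whose
# coefficients show which features move the score most near that record.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(model_predict, record, feature_names, n_samples=5000, scale=0.1):
    rng = np.random.default_rng(0)
    # Generate perturbed data points in a small neighbourhood of the record.
    perturbed = record + rng.normal(0.0, scale, size=(n_samples, record.size))
    scores = model_predict(perturbed)                      # black-box scores
    # Weight perturbations by proximity to the original record.
    weights = np.exp(-np.linalg.norm(perturbed - record, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(perturbed, scores, sample_weight=weights)
    # A larger absolute coefficient means the feature drives the local score more.
    return sorted(zip(feature_names, surrogate.coef_), key=lambda fc: -abs(fc[1]))

# Example with a hypothetical black-box scoring function and made-up features.
black_box = lambda X: 1 / (1 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1])))
record = np.array([0.3, 1.2])
print(local_explanation(black_box, record, ["utilisation", "tenure"]))
```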

Alternatively, it is possible to deploy models that are built to express interpretability on top of the inputs to the AI model. Examples include and-or graphs (AOGs), which associate concepts with deterministic subsets of input values, so that when such a subset is expressed the graph can provide an evidence-based ranking of how the AI reached its decision. These are most often used, and most easily described, in making sense of images.
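
A deliberately simplified, hypothetical sketch of the idea follows: AND nodes require all of their children to be supported, OR nodes select the best-supported alternative, leaves correspond to detected input features, and the branches actually chosen form the evidence trail behind the decision.

```
# Hypothetical sketch of an and-or graph over detected input features: AND nodes
# need all children supported, OR nodes pick the best-supported alternative, and
# the chosen branches form an evidence trail for the decision.

def evaluate(node, detections):
    """Return (support score, evidence list) for a node given detected features."""
    kind, children = node["kind"], node.get("children", [])
    if kind == "leaf":
        strength = detections.get(node["feature"], 0.0)
        return strength, [(node["feature"], strength)]
    results = [evaluate(child, detections) for child in children]
    if kind == "and":           # all parts must be present; support is the weakest part
        strength = min(s for s, _ in results)
        evidence = [e for _, ev in results for e in ev]
    else:                       # "or": choose the best-supported alternative
        strength, evidence = max(results, key=lambda r: r[0])
    return strength, evidence

graph = {"kind": "and", "children": [
    {"kind": "leaf", "feature": "wheel"},
    {"kind": "or", "children": [{"kind": "leaf", "feature": "sedan_body"},
                                {"kind": "leaf", "feature": "hatchback_body"}]},
]}
print(evaluate(graph, {"wheel": 0.9, "sedan_body": 0.2, "hatchback_body": 0.7}))
# -> (0.7, [('wheel', 0.9), ('hatchback_body', 0.7)])
```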

A third option is to change the entire form of the AI so that its latent (learned) features can be exposed, allowing reasons to be attached to the features internal to the model. This involves rethinking the design of an AI model from the ground up, with the view that it will need to explain the latent features that drive its outcomes, which is entirely different from how native neural network models learn.
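
One hypothetical way to sketch the idea (not any particular production method, and with illustrative parameters rather than trained ones): constrain each latent feature to a small, named group of inputs so that it can be labelled, and make the final score a transparent weighted sum of the latent activations, so that each latent feature’s contribution doubles as a reason.

```
# Hypothetical sketch: a model redesigned so each latent feature is built from a
# small, named group of inputs and the score is a transparent weighted sum of
# latent activations, so the latent features themselves carry reasons.
import numpy as np

LATENT_GROUPS = {                      # each latent feature sees only its own inputs
    "payment_behaviour": ["late_payments", "utilisation"],
    "account_stability": ["months_on_book", "address_changes"],
}
# Illustrative parameters (in practice these would be learned during training).
GROUP_WEIGHTS = {"payment_behaviour": np.array([-1.2, -0.8]),
                 "account_stability": np.array([0.5, -0.6])}
OUTPUT_WEIGHTS = {"payment_behaviour": 2.0, "account_stability": 1.0}

def score_with_reasons(record):
    """Score a record and attribute the score to named latent features."""
    contributions = {}
    for name, inputs in LATENT_GROUPS.items():
        values = np.array([record[f] for f in inputs])
        activation = np.tanh(GROUP_WEIGHTS[name] @ values)       # latent feature value
        contributions[name] = OUTPUT_WEIGHTS[name] * activation  # its share of the score
    total = sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get)       # most negative first
    return total, reasons

print(score_with_reasons({"late_payments": 2, "utilisation": 0.9,
                          "months_on_book": 12, "address_changes": 3}))
```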

GDPR is just one of a growing number of forces driving explainable AI. It is clear that as businesses continue to depend on AI to manage growing data sets and meet strict compliance regulations, explanation is essential. This is particularly important for the role played by AI in decisions that impact customers.

 

Dr Scott Zoldi is chief analytics officer at FICO.

© Financier Worldwide

