Artificial intelligence

August 2018  |  SPECIAL REPORT: TECHNOLOGY IN BUSINESS: STRATEGY, COMPLIANCE & RISK

Financier Worldwide Magazine


Artificial intelligence (AI) is set to transform the way many firms work. Estimates from the World Bank and PwC suggest that AI threatens between 40 and 80 percent of current jobs. There should be little doubt that AI is a disruptor like none before.

For institutions to harness the potential of AI, they will need to understand not only how to exploit these new technologies but also how to control the emerging risks they entail. We are only just beginning to contemplate how to manage the risks involved.

Some AI technologies are already being adopted at scale. Robotic process automation is carrying out routine processes from invoicing to provisioning disk space. Automation is an ideal technology for many back-office processes and the number of use cases is growing rapidly. For example, one professional services firm we have seen saved 40 full-time equivalents (FTEs) solely by automating the routine aspects of onboarding new recruits. Automation is also set to have a dramatic effect on offshore outsourcing, and many vendors are committing to price reductions for application management, helpdesk and other services as automation becomes a key component of their offering.
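Purely as an illustration, and not a description of any real deployment, a routine onboarding step of the kind described above might be scripted as below; the file name and record fields are hypothetical.

```python
# Illustrative sketch only: a toy "robotic process automation" job that turns
# new-joiner records into account-provisioning requests. In practice,
# commercial RPA platforms drive existing user interfaces rather than running
# bespoke scripts, but the principle is the same: a deterministic, repeatable
# process handed over to software.
import csv
from datetime import date

def provision_accounts(joiners_csv: str) -> list[dict]:
    """Read new-joiner records and emit account-provisioning requests."""
    requests = []
    with open(joiners_csv, newline="") as f:
        for row in csv.DictReader(f):
            requests.append({
                "username": f"{row['first_name'][0]}{row['last_name']}".lower(),
                "department": row["department"],
                "start_date": row["start_date"],
                "requested_on": date.today().isoformat(),
            })
    return requests

if __name__ == "__main__":
    for req in provision_accounts("new_joiners.csv"):
        print(f"Provisioning {req['username']} ({req['department']})")
```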

The risks and regulatory concerns which automation brings are largely new and require careful thought to identify and address. On the plus side, automation works 24/7, without human fallibility, and frees up bright and capable people to do more complex work. On the downside, organisations are taking on smaller suppliers, losing human knowledge, locking into new technologies and seeing a major upheaval of their workforce.

The risks presented by automation are only magnified by broader AI deployment, such as customer chatbots, robo-advice and smarter Know Your Customer (KYC) processes. These applications are just the start of a more complex world in which computers can understand our words and expressions, and their context, like never before. In the world of AI, computers can assess data faster and more thoroughly than any human in order to reach decisions. Siri, Alexa and other digital assistants are the beginning of a very different way for humans and technology to interact.

Regulators are alive to the issues. As part of its pro-competition mandate, the Financial Conduct Authority (FCA) is generally positive about data exploitation, machine learning and other AI phenomena. The FCA is working with the Bank of England toward more automated reporting and has been exploring its own use of AI as a regulatory enforcement tool. In the US, the Securities and Exchange Commission (SEC) has been using machine learning to spot outliers and trends in compliance data. However, the Federal Reserve, the FCA and other financial regulators are cognisant that once AI becomes consumer facing, or is deployed as a regulatory tool by a firm, new risks follow. They welcome the potential of these new technologies, while signalling caution.
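To make the idea concrete, and without any claim to reflect the SEC's actual tooling, outlier detection of the kind regulators describe might look like the following sketch, run here on synthetic data with invented "filing metrics".

```python
# Sketch only: flag unusual records in compliance-style data using an
# isolation forest, which scores how easily each point can be separated
# from the rest; easily isolated points are likely outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 500 ordinary filings plus 5 deliberately extreme ones (synthetic data).
ordinary = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
extreme = rng.normal(loc=5.0, scale=0.5, size=(5, 2))
X = np.vstack([ordinary, extreme])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks records worth a closer look
print(f"Flagged {(flags == -1).sum()} of {len(X)} records for review")
```

The point is not the particular algorithm but the workflow it enables: machines triage vast volumes of data, and humans investigate what is flagged.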

In other sectors, there have been a number of headlines which call into question the speed of adoption of AI. In 2016, Microsoft launched an experimental chatbot called Tay, which was subverted by the online community into becoming a racist, misogynist drug user until it was taken down. Self-driving cars may soon be safer than human drivers, but we know they have run through red lights and caused accidents while their AI is being tuned. In financial services, there will be a greater need to understand and monitor these technologies, and a good deal of care will be needed before launching live trading or client-facing technologies.

Other risks from AI can be more mundane, though no less important. These technologies are often brought to market by a new range of FinTech and RegTech vendors, can be cloud-based and often leave no intellectual property rights (IPR) in the hands of the customer. For many organisations, it will be important to assess the potential longevity of these new vendors and their products before deciding whether to bring them into the fold, and to plan for the eventuality that a firm or its clients may need to migrate to different technologies over time.

No doubt there is some way to go before we have established models for adopting AI in financial services. We should expect a degree of extra conservatism in the sector because the stakes are high. However, the upside of AI adoption will be phenomenal, granting banks and their customers better, faster insights and higher transactional efficiency. The outlook is for a more productive new economy based around the many new technologies which, together, we call AI.

An analogy for greater adoption of AI might be how cloud technologies have spread into institutions. We are entering a period in which more firms are adopting cloud technologies with greater ease: there is better regulatory guidance, more experience and more mature risk models. It can still take a conservative bank a long time to close a cloud deal, but the process is becoming quicker and smoother. It helps that cloud vendors such as Amazon are offering better regulatory protection to their customers, and institutions can be increasingly confident about the regulatory response to cloud implementation.

If AI and automation follow a similar curve to cloud, expertise will grow and adoption will become easier. In the meantime, firms should incubate AI at the right pace in order to reach wider adoption. The technology will slowly but surely mature.

It follows that institutions taking on AI should implement new risk structures. A risk-based approach should consider issues from vendor lock-in to loss of process knowledge, and from data protection to the risk of endemic errors. These are business, technology, procurement, legal and regulatory challenges which require joined-up thinking. Ensuring cross-disciplinary teams can work seamlessly to deliver AI for the business means finding an integrated approach to assessing and managing the risks.

While many AI projects will involve the displacement of people, in the long term new jobs will hopefully emerge. The Manufacturer reports that UK manufacturing has lost 800,000 jobs to automation while gaining 3.5 million higher-skilled roles over the same period. In many other areas, jobs are being created in response to new technologies, from social media managers and user experience specialists to Uber drivers. Since the industrial revolution, humans have shown a remarkable ability to keep themselves busy on ever-more productive work. Institutions need to think about the people and skills they will need in the AI world.

Hopes for new and interesting roles cannot, however, obscure the severe disruption to the workforce we are likely to see in the near future. Presently, there are no central government plans to change the way we educate, train or redeploy citizens, and it is likely that companies will have to learn to manage these changes in their own long-term interests.

AI is advancing at a speed which makes it hard to predict how the world will look in 10 years' time. All the same, we already know that AI can analyse data faster and better than humans. This will help in areas such as medical diagnosis, legal research, fraud detection and identifying market trends. AI, coupled with physical robots, can perform some complex tasks more accurately than humans, from driving to surgery. It can open up new ways of using computing through natural language and gesture, allowing us to do much more easily what we want to do, from finding something on TV to shopping for a mortgage. AI can also perform routine tasks highly effectively. While there are many more use cases to be discovered, AI is probably less useful for comforting us through illness, inventing a new kind of corkscrew or ensuring that the range of systems a bank is using is fit for the next five years.

The new roles we create will help us to understand human problems and devise human solutions, to oversee the choice of technologies and to monitor what those technologies do. People are still likely to be pivotal in ensuring the institutions of the future continue to attract and retain business by providing the best possible service. We need to evolve our organisations to adopt new risk management structures suited to these new technologies and their domains of application. We should engage now in working out how AI-led organisations will emerge and what they should look like in the future.

 

Simon Briskman is a partner at Fieldfisher LLP. He can be contacted on +44 (0)20 7861 4145 or by email: simon.briskman@fieldfisher.com.

© Financier Worldwide

