Mapping a sound AI business strategy across a landscape of uncertainty

March 2022 | EXPERT BRIEFING | BOARDROOM INTELLIGENCE

financierworldwide.com

 

Artificial intelligence (AI) – once the stuff of science fiction – is now part of everyday life. From tracking weather patterns, mapping traffic routes and recommending products for purchase, to improving food production and healthcare, smart technologies are leveraging the power of AI in an ever-increasing number of ways. AI can aid businesses in analysing data to improve decision making and automate processes while reducing operating costs and human error, but it comes with significant challenges and risks.

In an era of data-driven decision making, many businesses are already using AI. However, while strategies and policies for processing and retaining the data on which AI depends – particularly personal data – are becoming commonplace, few companies have articulated a strategy for AI. Legislation and regulation have pushed companies to adopt data privacy policies, but AI is developing far faster than regulation, or even internal company controls, can keep pace.

Governments are grappling with responses, but efforts to date primarily include frameworks, development plans, or long-term, high-level strategies focused on global collaboration and standards. Inevitable regulatory lag and policymakers’ often limited understanding of the technologies create risks for companies beyond the uncertainties that arise from the nature of AI itself, which makes creation of a coherent AI strategy all the more complex.

Businesses need to move quickly to craft strategies that better protect them against the risks and challenges of rapid AI growth, not simply in anticipation of, or in response to, regulatory compliance mandates, but to align AI use with company missions and priorities. The challenges go beyond a company’s data architecture to a host of practical considerations.

Of course, there are questions of cost, but there also are important, nuanced issues that must be considered within the whole of the business enterprise. Business leaders need to formulate an AI strategy with an eye not only to the potential impact on mechanised processes, but also to company strategies directed at compliance, human resources, environmental, social and governance (ESG) policies, and so forth.

Where AI is implemented in an uncertain and nascent regulatory environment, issues related to responsibility for, limitations on, and desirability of the technologies are at the forefront. Where is the line between human decision making and automated decision making drawn, and how is accountability factored in? These are not merely liability questions, but ethical ones.

Readers are likely familiar with the example of the self-driving car that, when human interference makes a fatality unavoidable, must instantly calculate whether to spare the life of its passenger or that of someone in another vehicle or a bystander. That automated decisional calculus depends on the quality of the available data and on the code used to process it.

What if there were a mistake in the code or a flaw in the data used by the AI in its split-second analysis? What value judgements are built into that analysis? Are there known limitations of the AI? For example, facial recognition technologies are known to exhibit higher error rates on darker skin tones, which can lead to inequitable impacts when those programs are deployed. Do similar problems arise in other contexts?
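One practical way to surface such limitations is to measure error rates separately for each affected group rather than relying on an aggregate figure, which can mask disparities. The following is a minimal sketch in Python, assuming a labelled evaluation set with group annotations; the field names and figures are hypothetical illustrations, not a prescribed audit method.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Per-group error rates from a labelled evaluation set.

    Each record is a dict with (hypothetical) keys:
      'group'     - the demographic group used for the audit
      'predicted' - the system's output
      'actual'    - the ground-truth label
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustration: an aggregate error rate of 12.5 percent hides a
# fourfold disparity between the two groups.
records = (
    [{"group": "A", "predicted": 1, "actual": 1}] * 95
    + [{"group": "A", "predicted": 0, "actual": 1}] * 5
    + [{"group": "B", "predicted": 1, "actual": 1}] * 80
    + [{"group": "B", "predicted": 0, "actual": 1}] * 20
)
print(error_rates_by_group(records))  # {'A': 0.05, 'B': 0.2}
```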

AI is designed, initially, to perform specific tasks, with actions that are clear and predictable. As AI becomes more sophisticated and complex, it ‘learns’ faster than humans can, shaping statistical analyses and automated decision making in ways that carry significant implications for accountability. How do we ensure that AI accords not only with business efficiency objectives, but also with changing notions of individual rights, equality, freedom from undue discrimination, and other moral considerations? Addressing these issues at the outset is especially important given the rising focus on the ESG impacts of business models on society, and it is essential in planning for how these technologies will be used now and in the future.

Companies are well advised to get out in front of the issues by taking a few practical steps to develop roadmaps for a strategic AI plan. Some of the outlined steps may seem obvious, but AI is developing so quickly that only a small number of companies have even begun to articulate meaningful AI strategies. To create a roadmap for an effective AI strategy, including adoption of governance procedures to both mitigate risks and harness the benefits of AI, businesses can undertake a number of practical measures, including those outlined below.

Establish an AI taskforce and governance board. A governance unit will map out the strategic direction of the company’s implementation of AI across the organisation. This task is easier said than done given the significant uncertainties about where responsibility for AI sits, and how and whether AI is already being used. The governance unit must include direct involvement from top management, but also should incorporate a variety of people with diverse skills and responsibilities throughout the company, including members of departments that are often, if unintentionally, siloed from one another.

Depending on the type of company and the technology used, coordination will be required among board members, top management, IT, legal and compliance, data collection and privacy divisions, HR, marketing and client relations, R&D, and possibly outside experts, to name just a few. In addition to focusing on the needs and goals of the company, this group should assess best practices as strategies are developed around the world to determine if and how elements of those models might be adapted and applied.

Determine AI objectives, use cases and principles governing the company’s approach to AI. What benefits is the company trying to achieve? What processes does the company seek to automate? Is the company simply looking for efficiency gains, or is AI potentially a core part of the company’s product or business model? Identify where AI-driven processes and technologies are already in use, and evaluate their impact and the structures already in place around them.

When defining objectives, the governance unit should also consider social and ethical principles so that these can be woven into the strategy from the beginning. This not only helps establish a benchmark for future use cases, but also will set the tone for discussions with suppliers, customers, regulators and other stakeholders. The objectives and principles should be institutionalised at every level of the business through leadership, communication, conduct guidelines, training and reward systems, and empowering employees and customers to raise concerns they encounter.

Conduct an AI risk assessment and create a governance plan. A holistic risk assessment of the entire enterprise should be undertaken, including evaluation of systems and controls already in place, potential risks specific to the company and the relevant industry, and a survey of existing regulatory, compliance and governance structures. It also should consider any ethical principles that have been defined or identified by the governance unit, along with any other social, environmental, economic or other principles that need to be aligned in the strategy. Importantly, the risk assessment will be an ongoing activity as the AI landscape evolves and technologies change.

Implement controls and monitor the impact of AI. If not already completed as part of a risk assessment, an inventory of AI systems within the organisation should be maintained along with documentation about what the systems do, how they work, what data is used, collected or stored, and what intellectual property rights are associated with them. Additionally, controls should include assessment and documentation of human oversight of the systems – who developed them and who is responsible for providing ongoing oversight, whether suppliers, company employees or contractors.
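To make this concrete, one possible shape for such an inventory entry is sketched below in Python. This is a hypothetical illustration only; the fields simply mirror the documentation points above, and every name and value is invented.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a (hypothetical) inventory of AI systems."""
    name: str              # internal identifier for the system
    purpose: str           # what the system does
    mechanism: str         # how it works: model type, vendor product, etc.
    data_used: list        # data sets used, collected or stored
    ip_rights: str         # associated intellectual property rights
    developed_by: str      # supplier, employee team or contractor
    oversight_owner: str   # who is responsible for ongoing human oversight
    last_reviewed: str     # date of the most recent review

inventory = [
    AISystemRecord(
        name="demand-forecasting",
        purpose="Forecast product demand for inventory planning",
        mechanism="Vendor-supplied machine learning model",
        data_used=["sales history", "seasonal calendars"],
        ip_rights="Model licensed from vendor; training data owned in-house",
        developed_by="External supplier, maintained by internal IT",
        oversight_owner="Head of Supply Chain Analytics",
        last_reviewed="2022-02-01",
    ),
]
```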

Insurance coverage and contract terms should also be assessed. With the lack of clarity about liability, rapidly changing technologies and uncertain regulation, insurance coverage and risk allocation provisions in contracts are helpful safeguards. It also will be important to build flexibility into any system of controls to allow for a rapidly evolving field with many unknowns.

Manage and maintain data. Most companies already have, or are in the process of establishing, systems to manage and maintain data in compliance with various data privacy regulatory regimes. AI requires a somewhat different approach: because AI relies on data to learn and function, the quality of that data is essential to a well-functioning system. Data sets should be reviewed, enhanced and maintained regularly to ensure that they are accurate and up to date, in addition to complying with the various data privacy regimes that may apply to the company’s business operations.
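As a sketch of what regular review could involve in practice, the hypothetical checks below flag missing values, duplicated rows and stale records in a tabular data set. The ‘last_updated’ column and the one-year staleness threshold are assumptions made for illustration.

```python
from datetime import datetime, timedelta

import pandas as pd

def data_quality_report(df: pd.DataFrame, max_age_days: int = 365) -> dict:
    """Basic completeness and freshness checks for a tabular data set."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    return {
        # share of missing values in each column
        "missing_ratio": df.isna().mean().to_dict(),
        # fully duplicated rows inflate the apparent sample size
        "duplicate_rows": int(df.duplicated().sum()),
        # records not refreshed within the allowed window (assumed column)
        "stale_records": int((df["last_updated"] < cutoff).sum()),
    }
```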

Monitor, update and refine the company’s AI strategy. As with any company policy or strategy, the AI strategy must be monitored and updated to account for changes in objectives, technologies, legislation or compliance requirements, and risks. Close watch should be kept on risks associated with the evolution of the technology. AI, by its nature, is designed to respond to changing data sets and to evolve accordingly. Given this complexity, it is essential to remain vigilant for unintended, unforeseen or adverse effects. To manage these risks and to reap the maximum benefits of AI, continuous education of the governance unit, business leadership and those impacted by AI is essential.
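As one illustration of what such close watch can mean for a deployed system, the sketch below computes a population stability index, a common measure of how far a model’s score distribution has drifted from an established baseline. The ten-bin setup and the rule-of-thumb reading of the result are assumptions for illustration, not standards drawn from the strategy described here.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Drift between two score distributions.

    Rule of thumb (a common convention, not a fixed standard):
    values above roughly 0.25 are often read as a significant
    shift that warrants investigation.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    eps = 1e-6  # avoids division by zero and log(0) in empty bins
    b = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    c = np.histogram(current, bins=edges)[0] / len(current) + eps
    return float(np.sum((c - b) * np.log(c / b)))
```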

Developing an AI strategy in a rapidly changing and uncertain regulatory and technological environment requires business leaders to accord a fair amount of trust – trust in their internal systems and controls, trust in the vendors, trust in the data that is processed by the technology, and trust in the AI technology itself. Developing a clear and comprehensive AI strategy is essential to mitigating the risks that are inherent in this environment while reaping the enormous benefits of AI technologies.

 

Giangiacomo Olivi is a partner and Jennifer Morrissey is counsel at Dentons. Mr Olivi can be contacted on +39 02 726 268 00 or by email: giangiacomo.olivi@dentons.com. Ms Morrissey can be contacted on +1 (202) 408 9112 or by email: jennifer.morrissey@dentons.com.

© Financier Worldwide

