Mastering AI governance: a strategic approach for corporate leaders

Financier Worldwide Magazine  |  September 2024 Issue  |  Spotlight: Risk Management


In recent years, corporations have increasingly leveraged artificial intelligence (AI) to enhance efficiency, generate insights, innovate products and gain competitive advantages.

However, the advent of generative AI (GenAI), capable of producing new content such as text, code, images and videos, has significantly escalated the need for robust governance in organisations. Many corporate leaders now grapple with establishing a balance between innovation, responsibility and compliance in the face of rapidly advancing AI capabilities.

Fortunately, most companies can build upon existing governance programmes and frameworks to support AI governance. Organisations already using traditional AI for supply chain optimisation, demand planning, customer segmentation and data analytics should already be leveraging existing structures, including risk management programmes, IT intake and prioritisation, procurement policies and technology vendor assessments. However, adopting GenAI for internal or external use adds complexity that existing governance and risk management processes must be extended to address.

Effective AI governance should be integrated with a company’s overall AI strategy, which includes the organisational structure to support AI – whether centralised, decentralised or federated. Organisations with decentralised AI use will require more rigorous governance processes and may face challenges in overseeing AI development, utilisation and risk. Additional considerations include the extent of AI democratisation throughout the company and who will be trained on model creation, modification and utilisation.

The technology strategy for AI forms the foundation for successful deployment, delivery and governance in any organisation, and underpins both the overall AI strategy and its governance. A company’s technology leadership should develop a platform, architecture and data strategy that enables consistent AI use while supporting fundamental AI governance principles.

While AI governance should not oversee other related frameworks like business portfolio strategy, IT project prioritisation or AI procurement, it should ensure that related functions incorporate necessary elements of AI oversight.

Managing AI governance

To manage the unique complexities of AI, most companies should consider establishing an AI governance board or forum, even if they already have comprehensive technology governance and risk management programmes. This group should comprise well-informed leaders from technology, legal, information security and privacy, human resources, strategy and corporate communications departments.

Meeting regularly, the board should discuss, document and examine key components of AI governance, maintaining a register of these components for AI uses within the organisation. Rather than directly managing AI practices or risks itself, the board should create a framework of oversight through which AI-related risks are identified, distinguished and managed.

Key components of AI governance

Key components of corporate AI governance include use case management, technology foundation, model training criteria, transparency and explainability, human-in-the-loop intervention, regulatory compliance, and privacy, security and confidentiality. While some of these areas may be governed by existing frameworks or policies, they may require adjustment to accommodate AI-specific concerns, as outlined below.

Managing use cases. Many companies set out to inventory all AI use cases in their organisations, but this can prove a daunting and unwieldy effort. Many companies have been using AI for years, so they must first decide whether to retroactively inventory historical use cases, which could be time- and resource-intensive. Rather than attempting to inventory every AI use case, AI governance boards should focus on governing GenAI uses and those traditional predictive AI uses that involve personally identifiable information (PII) or other risk factors such as discrimination or bias.

Given the ubiquity of AI in many organisations, managing use cases is an ongoing effort rather than a one-off exercise. The board should also establish guardrails that define acceptable uses and determine whether any uses, such as creating deepfakes or directly accessing certain GenAI tools, should be prohibited in the organisation.
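
For organisations that want to make such a register concrete, the sketch below shows what a minimal entry and triage rule might look like. It is illustrative only: the field names, risk categories and scope test are assumptions layered on the guidance above, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum


class BiasRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class UseCaseEntry:
    """One entry in a hypothetical AI use case register (illustrative fields)."""
    name: str
    owner: str                # accountable business or model owner
    is_generative: bool       # GenAI uses warrant closer governance
    involves_pii: bool        # personally identifiable information
    bias_risk: BiasRisk       # e.g. hiring, credit or pricing decisions
    prohibited: bool = False  # e.g. deepfake creation


def in_governance_scope(entry: UseCaseEntry) -> bool:
    """Mirror the triage rule above: govern GenAI uses, plus predictive
    uses involving PII or elevated bias risk."""
    return entry.is_generative or entry.involves_pii or entry.bias_risk is BiasRisk.HIGH


register = [
    UseCaseEntry("demand forecasting", "supply chain", False, False, BiasRisk.LOW),
    UseCaseEntry("marketing copy drafts", "brand team", True, False, BiasRisk.LOW),
]
print([e.name for e in register if in_governance_scope(e)])
```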

Recognising and supporting the technology strategy. The technology foundation of an organisation plays a crucial role in AI governance. A well-designed technology strategy involving a foundational AI and machine learning (ML) platform can help manage many AI risks through built-in safeguards and monitoring capabilities. These platforms can filter incoming and outgoing data, monitor for PII or confidential company information, and assist with the interpretability of model output. Organisations with highly decentralised AI or fragmented technology platforms may face greater challenges in governance, requiring more rigorous scrutiny and oversight.
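
As an illustration of the inbound and outbound filtering such platforms perform, the sketch below masks a few common PII patterns before a prompt reaches a model or a response leaves it. Production platforms rely on trained PII detectors and policy engines; the patterns and function here are simplified assumptions.

```python
import re

# Illustrative patterns only; real platforms use far more robust detection.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(text: str) -> tuple[str, list[str]]:
    """Mask suspected PII and report what was found, so the platform can
    log or block the request. Applied to prompts in and responses out."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found


clean_prompt, flags = redact("Escalate jane.doe@example.com, SSN 123-45-6789")
if flags:
    print(f"Filtered before the model call: {flags}")
```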

Understanding model training criteria. The data, large language models and criteria shaping all models should be well understood by the respective model and use case owners, especially for GenAI applications. Legal teams should provide guidance on permissible input or training data, including copyright considerations. Governance should also address the use of personal, demographic and sociographic data to evaluate potential impacts on fairness and ethical output. For third-party AI and ML solutions, model training criteria and ongoing management documentation should be included in service agreements and reviewed prior to purchase or deployment.

Transparency and explainability. Model owners and users must be able to understand and explain why a model produces specific outputs. If a model’s decision-making process cannot be explained, biased, erroneous or otherwise harmful output becomes far harder to detect and correct. When onboarding third-party AI and ML solutions, procurement, IT and security processes should include assessments of model explainability, even though the proprietary nature of some models may make this challenging.
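
For models built in-house, explainability can be supported with feature-attribution tooling. The sketch below uses the open-source SHAP library with an illustrative scikit-learn model; it is one common approach among several, and the dataset and model are stand-ins rather than a recommended setup.

```python
# A minimal feature-attribution sketch (pip install shap scikit-learn).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train an illustrative model; in practice this is the model under review.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Per-feature contributions for an individual prediction, which a model
# owner can use to explain why the model produced a specific output.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X.iloc[:1])
```

For proprietary third-party models, where this kind of direct inspection is often impossible, equivalent evidence should be requested from the vendor during procurement.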

Human-in-the-loop intervention. AI governance should provide oversight for highly sensitive or risky AI applications and ensure that results are reviewed before being displayed to customers or employees, or before populating company systems. This is particularly important for brand-sensitive uses and vital for operational applications in critical infrastructure sectors such as utilities, travel and healthcare. If an AI use case inventory is maintained, entries requiring human intervention should be flagged, along with a record of who examines those sensitive outputs for quality before deployment.
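
A review gate of this kind can be very simple in structure. The sketch below holds unapproved output from designated sensitive use cases in a queue until a named reviewer signs off; the use case names, queue and function are hypothetical illustrations, not a reference design.

```python
from dataclasses import dataclass


@dataclass
class ModelOutput:
    use_case: str
    content: str
    approved: bool = False  # set by a named human reviewer


SENSITIVE_USE_CASES = {"customer_chat", "outage_notice"}  # illustrative
review_queue: list[ModelOutput] = []


def publish(output: ModelOutput) -> None:
    """Hold sensitive, unapproved output for human review; release the rest."""
    if output.use_case in SENSITIVE_USE_CASES and not output.approved:
        review_queue.append(output)  # nothing reaches customers or systems yet
        return
    print(f"[{output.use_case}] {output.content}")  # safe to display


publish(ModelOutput("customer_chat", "Your refund is on its way."))
print(f"{len(review_queue)} output(s) awaiting human sign-off")
```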

Navigating the evolving regulatory environment. As the regulatory landscape for AI continues to evolve, legal team leadership must actively monitor changes and provide clarification and education within the organisation. Compliance with existing privacy laws, such as Europe’s General Data Protection Regulation or India’s Digital Personal Data Protection Act, should be extended to cover a company’s AI practices.

Additionally, AI-specific regulations such as the European Union’s AI Act need to be assessed for applicability to the organisation’s AI uses. Some AI-specific regulations focus primarily on platform or solution providers and sellers, but they may also extend to enterprise use of AI, especially where there is a risk of bias, ethical harm, privacy violations or other AI-related threats. Legal leadership must also monitor emerging trends in AI-related liability.

Managing privacy, security and confidentiality. Security and privacy leadership must ensure that AI uses are incorporated into their existing practices. This includes assessing and monitoring AI technology within the organisation, and establishing training and controls to defend against new attack vectors or fraud attempts in which threat actors use AI.

Establishing an AI policy

Risks associated with AI can generally be managed through existing governance framework policies. For example, acceptable use policies often encompass ethics and confidentiality and can be amended to expressly reference acceptable use related to AI and GenAI. Some organisations may opt to create a separate AI policy to comprehensively address AI-specific legal, copyright, regulatory, confidentiality, security and ethical concerns.

Company culture and education to support AI

Fostering a company culture that supports responsible AI use is crucial. Internal communications and training programmes should be leveraged to promote data-driven decision-making and a culture of innovation. Citizen data scientists should be trained to create models using simplified AI and ML platforms, extending these capabilities across the organisation.

Employees should be educated on the company’s AI-related governance frameworks and policies and the risks associated with AI use and consumption, including the potential for GenAI technology to be exploited for malicious purposes such as spreading misinformation or engaging in fraud.

Corporate AI governance is a multifaceted and evolving challenge that demands a proactive and holistic approach. By establishing robust governance frameworks, companies can harness the transformative power of AI while mitigating risks and upholding ethical standards.

The journey toward effective AI governance requires leadership, transparency and a commitment to continuous improvement. As AI continues to reshape the corporate landscape, those who navigate this complex terrain with foresight and responsibility will be best positioned to thrive in the AI-driven future.

 

Janet Sherlock is chief digital and technology officer at Ralph Lauren. She can be contacted by email: janet.sherlock@ralphlauren.com.

Dr Janet Sherlock is a distinguished digital and technology executive with a track record in driving organisational transformation and digital innovation. With a background as chief digital and technology officer at Ralph Lauren and various P&L and tech leadership positions, she has consistently delivered results in e-commerce, AI, analytics and supply chain efficiency. Dr Sherlock is now embarking on an advisory practice, Org.Works, focused on helping chief executives, chief human resources officers, and boards align executive leadership structures with corporate strategies, particularly in technology-driven areas such as AI, analytics and innovation. Her practice will leverage her extensive experience to support growth and efficiency.

© Financier Worldwide

