AI: risk and governance
January 2025 | SPOTLIGHT | RISK MANAGEMENT
Financier Worldwide Magazine
January 2025 Issue
Artificial intelligence (AI) is transforming industries across the globe, revolutionising the way organisations operate and how decisions are made. From automating routine tasks to providing advanced data insights, AI has become a critical tool in modern business strategy.
However, with the increasing reliance on AI comes the need to manage the associated risks and governance challenges – especially as they relate to the workforce. The potential for biased outcomes, regulatory compliance complexities, workforce disruption and data privacy concerns necessitates a robust framework for responsible AI governance.
Ethical considerations
One of the primary ethical concerns surrounding AI is the potential for algorithms to perpetuate or exacerbate existing biases. AI learns from historical data, and if that data reflects societal biases – whether based on gender, race or socioeconomic status – AI can inadvertently reinforce those biases in decision-making processes. This can be particularly problematic in areas such as recruitment, performance evaluation and employee engagement, where biased outcomes can lead to legal risk, reputational damage and a lack of diversity in the workplace.
For example, several high-profile cases have demonstrated the dangers of biased algorithms in hiring practices. A recruitment AI might favour male candidates over equally qualified female candidates because it was trained on a dataset that reflects historical male dominance in certain industries. These kinds of biases not only violate fairness and equality principles but also undermine organisational efforts to foster diversity and inclusion.
One critical approach to address this challenge is to build AI systems on diverse datasets that reflect a wide range of perspectives and experiences. By collaborating with data scientists and IT teams, organisations can develop AI systems that prioritise fairness and help reduce the risk of biased outcomes. Moreover, they can implement training programmes to raise awareness among employees and management about the importance of bias mitigation in AI decision making.
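To make this concrete, the sketch below shows one simple check a data science team might run before training: comparing each group’s share of a historical dataset with its share of positive outcomes, so that imbalances are visible early. It is a minimal Python illustration rather than any particular vendor’s tooling; the DataFrame and the ‘gender’ and ‘hired’ column names are hypothetical placeholders.

```python
# A minimal sketch of a training-data representation check; the
# 'gender' and 'hired' column names are hypothetical placeholders.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare each group's share of the dataset with its share of
    positive outcomes ('hired' == 1)."""
    report = pd.DataFrame({
        "share_of_records": df[group_col].value_counts(normalize=True),
        "share_of_hires": df.loc[df["hired"] == 1, group_col]
                            .value_counts(normalize=True),
    })
    # A large negative gap means the group is under-represented among
    # positive outcomes relative to its presence in the data, a warning
    # sign that a model trained on this history may reinforce the skew.
    report["gap"] = report["share_of_hires"] - report["share_of_records"]
    return report

history = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "F", "M", "F", "M"],
    "hired":  [1,   1,   0,   0,   1,   1,   0,   1],
})
print(representation_report(history, "gender"))
```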
In addition to technical solutions, organisations should promote a culture that values inclusivity and ethical AI use. This includes advocating for AI designs that reflect an organisation’s commitment to diversity and fairness, as well as continuously monitoring AI systems for potential biases as new data is introduced.
Transparency and accountability
Transparency in AI decision making is a critical component of effective governance. The opaque nature of many AI systems presents significant challenges, as even the developers or operators of the systems may not fully understand how decisions are made. This lack of transparency can lead to distrust among stakeholders – whether employees, customers or regulators – and can create significant legal and ethical risks.
One key to building trust is ensuring that AI-driven decisions are transparent and explainable. Organisations should establish clear processes for documenting how AI systems function, how they make decisions and what data they rely on. This requires close collaboration among data scientists, IT, human resources (HR) and legal teams to ensure that AI is not only technically sound but also aligned with organisational values and compliance requirements.
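As an illustration of what ‘explainable’ can mean in practice, the sketch below attributes a candidate score from a linear model to its individual inputs, producing a record that HR, legal and data science teams can review alongside the decision. It is a minimal Python example using scikit-learn; the feature names and data are hypothetical, and real systems typically require more sophisticated attribution methods.

```python
# A minimal sketch of per-decision explainability for a linear scoring
# model; feature names and training data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "certifications", "interview_score"]
X = np.array([[2, 0, 3.1], [7, 2, 4.0], [4, 1, 2.5], [9, 3, 4.5]])
y = np.array([0, 1, 0, 1])  # past outcomes used for training

model = LogisticRegression().fit(X, y)

def explain(candidate: np.ndarray) -> dict:
    """Attribute the decision score to each input feature.

    For a linear model the log-odds are a weighted sum of the inputs
    (plus an intercept), so each feature's contribution is coef * value.
    """
    contributions = model.coef_[0] * candidate
    return dict(sorted(zip(features, contributions),
                       key=lambda kv: abs(kv[1]), reverse=True))

# Store the ranked contributions with the decision so reviewers can see
# why a given score was produced.
print(explain(np.array([5, 1, 3.8])))
```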
Accountability is another critical aspect of AI governance. Organisations should define clear lines of responsibility for AI-related decisions, particularly when something goes wrong. If an AI system makes a biased hiring decision, for example, who is ultimately responsible – the AI developer, the HR department or the executive leadership team? Establishing clear accountability helps ensure that individuals and teams know they are responsible for AI outcomes and that appropriate checks and balances are in place to address issues as they arise.
By fostering transparency and accountability, organisations can help build trust in AI, both within the organisation and externally. Transparent AI systems are more likely to gain the support of employees, stakeholders and regulators, helping organisations navigate the complexities of AI adoption with greater confidence.
Regulatory compliance
As AI adoption increases, so too does the regulatory scrutiny surrounding its use. Governments around the world are introducing new regulations to ensure that AI technologies are deployed ethically, safely and securely. The European Union’s AI Act, for example, is one of the most comprehensive regulatory frameworks aimed at addressing the risks associated with AI. The Act focuses on ensuring that AI systems are transparent, accountable and free from harmful biases, particularly in high-risk applications like healthcare, finance and human resources.
For organisations, staying ahead of regulatory changes can be a significant challenge. The rapid pace of AI development means that laws and regulations are continually evolving, and what is considered compliant today may not be tomorrow. Additionally, organisations need to ensure that their AI usage complies with existing regulations. For example, Title VII of the Civil Rights Act of 1964 provides federal protections for employees and applicants against discrimination on the basis of certain characteristics, including race, religion and gender, and the Age Discrimination in Employment Act (ADEA) prohibits age discrimination. An organisation using AI to screen applications may, for instance, disqualify applicants who lack certain educational credentials, inadvertently discriminating against older applicants or favouring a particular gender.
To help avoid these issues, organisations should consider conducting regular audits to assess the compliance of AI systems with data protection laws, anti-discrimination regulations and industry-specific standards. Additionally, organisations must stay informed about regulatory trends and adapt their AI governance strategies accordingly.
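One concrete form such an audit can take in the US employment context is the ‘four-fifths’ rule from the Equal Employment Opportunity Commission’s Uniform Guidelines, under which a selection rate for any group that is less than 80 percent of the rate for the highest group is generally regarded as evidence of adverse impact; the same arithmetic is often applied by analogy to age bands under the ADEA. The sketch below is a minimal Python illustration; the decision-log format is an assumed placeholder.

```python
# A minimal sketch of a periodic adverse-impact audit using the
# four-fifths rule; the decision-log format is an assumed placeholder.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, whether the applicant advanced)."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, passed in decisions:
        totals[group] += 1
        advanced[group] += int(passed)
    return {g: advanced[g] / totals[g] for g in totals}

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose selection rate falls below 80 percent of
    the highest group's rate."""
    benchmark = max(rates.values())
    return {g: rate < 0.8 * benchmark for g, rate in rates.items()}

# Hypothetical screening log: 100 applicants per age band.
decisions = ([("under_40", True)] * 45 + [("under_40", False)] * 55
             + [("40_plus", True)] * 25 + [("40_plus", False)] * 75)
rates = selection_rates(decisions)
print(rates)                     # {'under_40': 0.45, '40_plus': 0.25}
print(four_fifths_flags(rates))  # '40_plus' flagged: 0.25 < 0.8 * 0.45
```

A flag of this kind is a trigger for human review of the screening criteria, not proof of discrimination.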
In addition to regulatory compliance, organisations should also prioritise workforce awareness. Training programmes that educate employees about AI-related compliance requirements and ethical considerations can help foster a culture of responsibility and accountability. By integrating compliance into the broader AI governance framework, organisations can potentially reduce the risk of legal penalties and reputational harm while demonstrating a commitment to ethical AI use.
Workforce impact
Although AI can disrupt existing roles, it also creates opportunities for job growth and upskilling. Organisations should focus on using AI to create new roles and enhance employee skillsets. This involves conducting workforce assessments to identify roles at risk of automation and developing targeted reskilling programmes. For instance, while AI may automate data entry tasks, it can also create roles requiring advanced data analysis skills.
Change management is critical throughout this transition. Organisations should communicate the potential benefits of AI, offer training and provide resources to help employees navigate changes. Leadership also plays a key role in this process, as visible and vocal support from organisational leaders can reduce resistance to change and foster a positive environment for AI adoption.
Data privacy and security
AI often relies on vast amounts of data, including sensitive employee information. As organisations implement AI within workforce processes – such as recruitment, performance management and employee engagement – it is critical that data privacy and security are prioritised.
As with any data-oriented solution or technology, organisations should implement robust data security measures to protect against breaches and cyber attacks. However, data privacy concerns are not limited to external threats; they also involve balancing the organisation’s need for data with employees’ privacy rights.
For example, AI used for monitoring employee performance can raise ethical questions about surveillance and data collection. Organisations should advocate for transparent data usage policies that clearly outline how employee data is collected, stored and used, and should ensure that these policies comply with data protection regulations such as the General Data Protection Regulation (GDPR).
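As a small illustration of what such a policy can look like at the technical level, the sketch below applies data minimisation before employee records reach an AI pipeline: direct identifiers are replaced with a keyed hash, and fields the model does not need are dropped. The field names and hashing scheme are illustrative assumptions, not a complete GDPR control.

```python
# A minimal sketch of pseudonymising employee records before AI
# processing; field names and the keyed-hash scheme are illustrative
# assumptions, not a complete GDPR control.
import hashlib
import hmac

# Assumed secret; in practice this would live in a secrets manager and
# be rotated periodically.
SECRET_KEY = b"store-me-in-a-secrets-manager"

ALLOWED_FIELDS = {"tenure_years", "role", "performance_score"}

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash and keep only
    the fields the model actually needs (data minimisation)."""
    token = hmac.new(SECRET_KEY, record["employee_id"].encode(),
                     hashlib.sha256).hexdigest()[:16]
    return {"employee_token": token,
            **{k: v for k, v in record.items() if k in ALLOWED_FIELDS}}

raw = {"employee_id": "E-1042", "name": "A. Example",
       "tenure_years": 4, "role": "analyst", "performance_score": 3.8}
print(pseudonymise(raw))  # the name and raw ID never enter the AI system
```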
By working closely with IT and legal teams, organisations can ensure that data privacy and security are effectively embedded into the AI governance framework. This not only protects employees’ rights but also builds trust in the organisation’s use of AI technologies.
Conclusion
The rapid adoption of AI presents both opportunities and challenges for organisations. AI’s transformative potential is undeniable: it automates routine tasks, provides advanced data insights and enhances productivity.
However, this technological advancement can also bring significant challenges and risks. Many organisations struggle with the implementation and adoption of new and evolving technologies and solutions, particularly those linked to AI-driven decisions.
By proactively addressing the risks associated with AI, organisations can ensure that AI is implemented in a way that enhances business outcomes while maintaining ethical standards and protecting employees’ rights. This involves developing comprehensive strategies to reskill and upskill employees, preparing them for new roles created by AI advancements, and implementing appropriate change management efforts to ease the transition. Lastly, it is crucial to promote ethical AI use by building AI systems on diverse datasets, prioritising fairness and continuously monitoring for potential biases.
Creating a culture of transparency, accountability and continuous learning is key to fostering a resilient organisation prepared for the future of AI. Organisations should establish clear processes for documenting how AI systems function, how they make decisions and what data they rely on.
Ensuring data privacy and security is paramount. Organisations should implement robust measures to protect sensitive employee information, thereby building trust in AI technologies. Fostering workforce awareness through comprehensive training programmes is also crucial. These programmes educate employees about AI-related compliance requirements and ethical considerations, helping to cultivate a culture of responsibility and accountability.
Ultimately, by addressing the challenges and leveraging the opportunities presented by AI, organisations will be better positioned to navigate the complexities of AI adoption and build a future where AI serves as a positive force for innovation and growth, enhancing business outcomes while maintaining ethical standards and protecting employees’ rights.
Frank Giordano and Kalpita Ainapure are senior managers and Josh Rousselo is a manager at Deloitte. Mr Giordano can be contacted on +1 (914) 419 5740 or by email: fgiordano@deloitte.com. Ms Ainapure can be contacted on +1 (773) 676 6772 or by email: kainapure@deloitte.com. Mr Rousselo can be contacted on +1 (734) 735 5307 or by email: josh@deloitte.com. The authors would like to thank Reem Janho, a principal at Deloitte Consulting LLP, for her assistance with the preparation of this article.
© Financier Worldwide