Artificial Intelligence: An engagement guide

Severine Neervoort is Global Policy Director, and Wendela Rang is Policy Executive at the International Corporate Governance Network (ICGN). This post is based on their ICGN memorandum.

1. Introduction

Artificial intelligence (AI) presents both extraordinary opportunities and complexities for today’s companies. The OECD defines an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.[1] An increasing number of companies are using AI to transform existing business models or create new ones, generate greater efficiencies, and enhance strategic decision-making, all of which are critical for their competitiveness. However, AI also poses risks and challenges that company boards and management teams must be able to understand and address.

Investors expect companies to effectively navigate AI-related challenges whilst maximising the benefits of AI integration. Our Investor Viewpoint aims to encourage a constructive dialogue on this fast-evolving and increasingly important technology. We consulted available sources and engaged with our members to develop a guide for investors and companies. This Viewpoint supports investors in assessing whether a company uses AI in a safe, ethical, and sustainable manner, leading to a series of questions for use in investor-investee dialogue. Also, by proactively using this guide, boards can anticipate investors’ areas of interest and concern, and better assess the robustness of their AI oversight.

2. Investors’ expectations of companies

The rapid development and use of AI necessitates comprehensive AI governance and risk management processes.[2] Board oversight, responsible practices, risk management and accountability to shareholders and stakeholders, and transparency and explainability are cornerstones of responsible AI.[3]

Board oversight

The board of directors (board) is accountable for overseeing a company’s responsible development and use of AI.[4] As part of its fiduciary duty to preserve and enhance long-term value, the board should ensure that the company management balances the competitive deployment of new technology against potential risks – including risks to people and society.

Boards should be able to explain to investors the extent to which the company approaches AI as a risk or as an opportunity, and its short, medium, and long-term plans for integrating AI as part of its business model.

Boards should ensure that they are properly equipped to oversee AI-related risks and opportunities. Knowledge of AI can come through different channels, including training, advisory bodies, engagement with external experts, and continuous awareness programmes. Surveys suggest that many boards lack AI knowledge or overestimate their understanding of it.[5] If board members do not have sufficient AI expertise or lack access to advice, they may be ineffective in identifying and assessing material AI-related risks and opportunities.[6]

Responsible AI practices

Companies should implement AI in a way that preserves trust in the company and prevents, as far as reasonably possible, economic, human, social, and environmental harm. Companies should implement AI governance and due diligence procedures proportionate to the potential impacts of their AI activities. Companies developing and training AI face distinct risks from those using third-party AI solutions, and bear responsibility for ensuring that their system programming promotes safe and ethical outcomes. Companies should consider articulating their approach to AI in an overarching statement or a set of principles, and should embed responsible AI in existing policies, such as their Code of Conduct, Information Security, Data Ethics, Data Privacy, and Vendor Assessment Policies.

AI’s impact on the workforce is a complex and evolving topic: AI raises major concerns about job displacement, changes in job roles, and the need for new skills. Whilst research suggests that AI could expose millions of full-time jobs to automation,[7] a study found that fewer than one in three CEOs had assessed the potential impact of generative AI (GenAI) on their workforces. GenAI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.[8] Awareness of the potential societal impact of mass automation, and an understanding of the effects of the company’s use of AI on its workforce, are critical and should be discussed at board level.[9] Companies should consider upskilling or reskilling efforts to address evolving talent needs.[10]

Furthermore, companies should define the scope of AI applications used in human capital management. This should reflect whether the company believes the technology could enable fairer, more efficient, transparent, and inclusive practices, how the technology is expected to deliver on this, and what might happen if it fails.[11] Companies should ensure that employees are trained to use AI in a responsible manner, with a good understanding of the technology and its potential impacts. Companies must also address the potentially uneven distribution of AI’s benefits amongst the workforce and mitigate accessibility barriers for employees with disabilities.[12]

Regarding environmental risks, research shows that AI could support natural capital management through positive innovations; identifying pollution, and mapping and measuring deforestation and the melting of icebergs, are established examples.[13] However, AI uses more energy than other forms of computing, raising concerns about its own ecological footprint.[14] Training a single model can use more electricity than 100 homes consume in a year, and the production of semiconductors (the hardware underpinning AI’s computation) is highly polluting.[15] These considerations must be included in companies’ decision-making.

Risk management

Boards should ensure that companies have strong risk management processes to identify, assess, and mitigate financially material AI-related risks, as well as potential adverse impacts on society and the environment.

AI-related risks include, among others:

  • Unwanted bias: automated systems relying on biased data or design can produce discriminatory outcomes, perpetuating inequalities in decision-making. Some companies have faced legal action over AI systems that allegedly reinforced discriminatory outcomes.[16]
  • “Hallucinations”, where AI generates false or fabricated information.[17]
  • AI systems trained on inaccurate, outdated, or otherwise unsuitable data.[18]
  • Spread of misinformation, disinformation, or harmful material through AI-generated content.
  • Failure to evaluate the risks of third-party AI. Research suggests that more than half of all AI failures come from third-party tools, on which most companies rely.[19]
  • Intellectual property (IP) infringement.[20]
  • Data security breaches, including hacking or privacy violations.
  • Technical malfunctions that could, for instance, cause autonomously operated machines to endanger human life.

To mitigate risks, it is advisable to use AI subject to human supervision and intervention. In addition, piloting an AI use case with sector, market, and product specialists before integrating the system across the business can help identify areas of success and concern.[21]

To identify and address salient human rights issues or adverse impacts on the environment, boards should ensure that management conducts impact assessments, audits, and due diligence.[22] Understanding the implications of the use of AI for the human rights to privacy and data protection and to equality and non-discrimination, and for social cohesion, for instance, is a fundamental element of AI governance.[23]

More generally, companies should conduct risk-based due diligence throughout their value chain, in line with the OECD Guidelines for Multinational Enterprises, and this should include AI-related considerations. This entails: (1) embedding responsible business conduct into policies and management systems; (2) identifying and assessing actual or potential adverse impacts; (3) ceasing, preventing, or mitigating those impacts; (4) tracking implementation and results; (5) communicating how impacts are addressed; and (6) providing grievance mechanisms and remediation when appropriate.[24]

Transparency and explainability

Transparency and explainability help build trust and ensure accountability to shareholders, stakeholders, and society at large. Company management should be able to explain to their boards how the AI systems they develop or use have been designed, trained, tested, and scaled, and how they align with human values and intent.[25] Companies developing and training AI must be transparent about what data the model has been trained on.

Furthermore, all companies should be transparent about how the AI systems they deploy collect, use, and store personal data.[26] According to emerging best practices, company management should ensure that stakeholders, such as customers and employees, have consented to their data being used by AI. Moreover, all stakeholders should be made aware of their interactions with AI systems and companies should be transparent about any content that is AI-generated.[27] Finally, investors expect timely disclosures of any material AI-related controversy.

Regulatory compliance

Whilst most countries do not, at the time of writing, have AI-specific regulation, there are frameworks that companies should follow (see Annex 1). Boards should ensure that company management implements existing regulations and relevant standards on responsible AI, such as the OECD AI Principles, UNESCO’s Recommendation on the Ethics of AI, or ISO/IEC 42001:2023, which specifies requirements for establishing, implementing, and improving AI management systems in an organisation.[28] As AI regulation is an evolving policy area, boards and management should stay up to date with the latest developments.

3. Stewardship dialogue

ICGN encourages shareholders to engage in a constructive dialogue with investee companies, with the objective of creating long-term value on behalf of beneficiaries or clients. When engaging with company boards and management teams on AI-related matters, investors can consider the following points, which boards can also use to better assess the robustness of their AI oversight.

  1. Has AI been considered in the development of the company’s strategy? Does the company (or is it planning to) develop or use AI and, if so, how?
  2. How does the board ensure that it has sufficient knowledge and understanding of AI, if deemed relevant for the company?
  3. Has the company publicly articulated its approach to responsible AI? Is responsible AI embedded in relevant company policies (e.g. Code of Conduct, Data Privacy)?
  4. What risk management processes have been established to identify and mitigate material AI-related risks? What are the key AI-related risks for the company, and how are they being mitigated? Who would be held accountable for AI-related controversies?
  5. Have any bias or privacy issues been identified?
  6. Has the board discussed how the AI systems the company uses or develops have been designed, trained, and tested?
  7. How does the company assess the implications of its use of AI on the workforce?
  8. Is management planning to reskill or upskill employees affected by automation, and, if so, how will success be measured?
  9. How does management conduct risk-based due diligence to identify, and prevent or mitigate adverse impacts of its use of AI on society and the environment?
  10. How regularly does the board engage with its stakeholders, including employees, on AI? Has the company established a grievance mechanism for AI-related matters?

4. Conclusion

AI holds profound promise for boosting business efficiency, productivity, and insight, and companies that fail to deploy AI may be exposed to a loss of competitiveness. However, there are material risks associated with the use of AI, and investors expect companies to use AI responsibly.

Boards should ensure the implementation of robust AI governance processes, encompassing oversight and accountability, responsible AI practices, transparency and explainability, risk management, and regulatory compliance, to support the safe, ethical, and sustainable development and use of AI.

With the fast pace of technological change, and AI being used by an increasing number of companies across many economic sectors, best practices for responsible AI – in terms of governance, conduct, and reporting – will continue to evolve. It is in this spirit that ICGN encourages a constructive dialogue between investors, companies, policymakers, and standard-setters on this important topic.

Endnotes

1 OECD, ‘Recommendation of the Council on Artificial Intelligence’, OECD/LEGAL/0449, 2023

2 Christine Chow, Mark Lewis, & Paris Will, ‘Future of Work: Investors’ Expectations on Ethical Artificial Intelligence in Human Capital Management’, 2022

3 ICGN has incorporated Norges Bank Investment Management’s (NBIM) framework for responsible AI in this Viewpoint. See NBIM, “Responsible artificial intelligence”, 15 August 2023

4 Holly J. Gregory, “AI and the Role of the Board of Directors”, Harvard Law School Forum on Corporate Governance, 07 October 2023; David Edelman & Vivek Sharma, “It’s Time for Boards to Take AI Seriously”, Harvard Business Review, 02 November 2023; Andrew Kakabadse and Nada Kakabadse, “What boards really need to know about AI”, Board Agenda, 04 August 2023; Chow et al., ibid.

5 Ned On Board, “Artificial Intelligence and Boards: Governance recommendations for greater positive impact”, January 2023, surveyed 700 leaders on their organisations’ use of AI: 58% had no AI expertise on their boards or did not know their board members’ proficiency, and 59% of board members were not aware of AI-related regulations. Institute of Directors (IoD), “AI in the Boardroom: The essential questions for your next board meeting”, 2022, found that 80% of IoD member boards did not have a process to audit their AI, and 86% were using some form of AI without the board’s awareness.

6 David Edelman and Vivek Sharma, “It’s Time for Boards to Take AI Seriously”, 02 November 2023

7 World Economic Forum, ‘The Future of Jobs Report 2020’; Goldman Sachs, “Generative AI could raise global GDP by 7%”, 05 April 2024

8 IBM, “CEOs Embrace Generative AI as Productivity Jumps to the Top of their Agendas”, 27 June 2023; IBM, “What is generative AI?”, 20 April 2023

9 The Alan Turing Institute, ‘Data science, artificial intelligence, and the futures of work’, October 2018

10 U.S. Government Accountability Office, ‘Workforce Automation: Insights into Skills and Training Programs for Impacted Workers’, 17 August 2022

11 Chow et al., ‘Future of Work: Investors’ Expectations on Ethical Artificial Intelligence in Human Capital Management’, 2022; Paris Will, Dario Krpan and Grace Lordan, ‘People versus Machines: Introducing the HIRE Framework’, Artificial Intelligence Review, 2023. The latter highlights that AI may have the potential to bring positive changes, such as advancing diversity, equity, and inclusion (DEI) initiatives in hiring.

12 The Alan Turing Institute, ‘Data science, artificial intelligence, and the futures of work’, October 2018; U.S. Government Accountability Office, ‘Workforce Automation: Insights into Skills and Training Programs for Impacted Workers’, 17 August 2022

13 World Economic Forum, “9 ways AI is helping tackle climate change”, 12 February 2024

14 International Energy Agency, “Why AI and energy are the new power couple”, 02 November 2023

15 International Energy Agency, ibid.

16 Bloomberg Law, “Workday AI biased against Black, older applicants, suit says”, 22 February 2023; Forbes, “Cigna sued over algorithm allegedly used to deny coverage to hundreds of thousands of patients”, 23 July 2023

17 OECD, “What is an AI hallucination?”, n.d.

18 David Leslie, ‘A guide to AI ethics, including responsible design and implementation of AI systems in the public sector’, The Alan Turing Institute, 2019

19 MIT Sloan Management Review, “Third-party AI tools pose increasing risks for organizations”, 21 September 2023

20 Gil Appel, Juliana Neelbauer, & David A. Schweidel, “Generative AI has an intellectual property problem”, Harvard Business Review, 07 April 2023

21 Use cases define the steps that illustrate how a process will be carried out in a system. Elizabeth Larson and Richard Larson, ‘Use cases: what every project manager should know’, Project Management Institute, 2004

22 UNESCO, ‘Ethics of Artificial Intelligence’, 2023

23 Chatham House, “AI governance and human rights: Resetting the relationship”, January 2023

24 OECD, ‘OECD Guidelines for Multinational Enterprises on Responsible Business Conduct’, 2023

25 NBIM, “Responsible artificial intelligence”, 15 August 2023

26 Information Commissioner’s Office, “How do we ensure transparency in AI?”, 15 March 2023

27 OECD, ‘Recommendation of the Council on Artificial Intelligence’, Transparency and Explainability (Principle 1.3), 2023

28 OECD, ibid.; UNESCO, ‘Recommendation on the Ethics of AI’, 2021; International Organization for Standardization, ‘ISO/IEC 42001:2023 – Artificial Intelligence, Management system’, 2023
