Published: Nov 11, 2023

Whether you are aware of it or not, people within your organisation are likely already using AI, even if only through occasional use of generative AI tools such as ChatGPT.

This article is not intended for organisations that are actively developing AI, which requires a level of sophisticated oversight beyond our scope here.

Possible applications for generative AI [1]

The list is long, but the main uses are:

  • conversational data mining, interrogating corporate datasets
  • document drafting—first-cut business documents 
  • customer engagement—routine queries
  • idea generation—contributing to brainstorming sessions.

Generative AI is not without risk

The main risks are:

  • data leaks. ChatGPT retains the data you enter, learns from it and may reuse it.
  • hallucination. When the necessary data is absent, generative AI can fabricate a plausible-sounding answer.
  • questionable sources. Much of the internet’s source material already contains errors or bias.

Necessary oversight

In an excellent recent article [2] in Directors and Boards, the authors offer this very crisp risk assessment:

Generative AI are machines, not people. They have no knowledge, expertise, experience or qualifications — in any field whatsoever, not least corporate governance or business administration. Unlike directors, generative AI owes no fiduciary duties and faces no liability for breach. 

The European Union is about to pass the first major piece of AI legislation, grounded in the need for its citizens to trust the use of AI. The approach is risk-based and acknowledges that most current AI interaction is low risk, e.g., video games or spam filters. However, users should be aware when they are interacting with a machine (e.g., chatbots) so they can make an informed decision to continue or step back.

By extension, we suggest that the use of AI tools within any organisation should be obvious and transparent.

The draft European legislation [3] offers four classes of risk:

  • Unacceptable risk. Anything that poses a clear threat to the safety, livelihoods and rights of people will be banned. Systems that attempt ‘social scoring’ are cited as an example.
  • High risk. Covers a range of activities already creeping into everyday life, for example CV-sorting software, credit scoring, transport infrastructure, verification of travel documentation, predictive law enforcement and remote surgery. High-risk systems will need to be assessed, certified and registered.
  • Limited risk. Includes the use of tools such as chatbots.
  • Minimal risk. Covers most low-level AI.

Consistent with the European approach, it is necessary to understand when an AI system is in use, and why it has made a particular decision or prediction, or taken a particular action.

In 2019, the Hong Kong Monetary Authority issued a letter [4] to financial institutions noting that, ‘the board and senior management of banks should appreciate that they remain accountable for all AI-driven decisions’. They further note that AI applications should be explicable to all relevant parties—no black box excuses. 
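
The HKMA letter stops at the principle, but one way to make ‘explicable’ concrete is to surface which inputs most influenced a model’s decisions. The sketch below is our own illustration, not anything the letter prescribes: it uses scikit-learn’s permutation importance on an invented credit-scoring example, and every name in it is hypothetical.

```python
# Illustration only: ranking the inputs that drive a model's decisions,
# so an AI-driven decision is explicable rather than a black box.
# The model, data and feature names are all invented for this sketch.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in for a real credit-scoring dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "payment_history", "account_age"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each input degrade accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```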

The quality of data going into AI systems is crucial, an evolution of the longstanding GIGO mantra (garbage in, garbage out). Data quality needs to be defined and regularly reviewed for accuracy, completeness, timeliness and consistency. The Authority states that AI-driven decisions should not discriminate against, or unintentionally show bias towards, any group of consumers.
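
Again as our own illustration rather than anything the Authority prescribes, the periodic review of data quality can be partly automated. The sketch below checks completeness, timeliness and consistency for a hypothetical customer table; accuracy is deliberately omitted because it normally requires comparison against a trusted reference source.

```python
# Illustration only: mechanical data-quality checks on a dataset feeding
# an AI system. The DataFrame and column names are hypothetical.
import pandas as pd

def data_quality_report(df: pd.DataFrame, timestamp_col: str,
                        max_age_days: int = 30) -> dict:
    """Summarise completeness, timeliness and consistency of a dataset."""
    age = pd.Timestamp.now() - pd.to_datetime(df[timestamp_col])
    return {
        # completeness: fraction of non-missing values per column
        "completeness": (1 - df.isna().mean()).round(2).to_dict(),
        # timeliness: fraction of records updated within the review window
        "timeliness": float((age <= pd.Timedelta(days=max_age_days)).mean()),
        # consistency: duplicates suggest conflicting or re-ingested records
        "duplicate_rows": int(df.duplicated().sum()),
    }

# Toy usage: three customer records, one stale and one incomplete.
customers = pd.DataFrame({
    "income": [52000, None, 61000],
    "last_updated": ["2023-11-01", "2023-11-05", "2023-01-15"],
})
print(data_quality_report(customers, "last_updated"))
```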

The use of AI tools should align with corporate values and ethical standards, and uphold consumer protection principles. When AI-powered services are in use, that fact should be made clear and the risks outlined before the service is provided. The Authority outlines requirements for periodic review to ensure applications continue to perform as intended, which is vital given their capacity for learning and adaptation. The need for risk management planning, especially in banking, is obvious, with risk limits, ‘humans in the loop’ and the ability to fall back to human-only systems all necessary.

In a detailed 2022 article [5], the Harvard Law School Forum on Corporate Governance outlined three key steps board members need to take.

First, understand how widespread AI already is and the risks involved. This is a learning requirement.

Next, understand that AI is not neutral or infallible. Human touch points will introduce bias.

Finally, make a game plan. The Harvard article cites a recent report indicating that sixty percent of respondent companies could not explain how specific AI model predictions or decisions were made, and that only twenty percent actively monitor their models under development for fairness and ethics.
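
The Harvard article does not describe how such monitoring is performed. As one hypothetical illustration, a simple fairness check could compare the rate of favourable model outcomes across groups, a demographic-parity style measure; the data and column names below are invented.

```python
# Illustration only: a demographic-parity style check that compares the
# rate of favourable model outcomes across groups. Names are invented.
import pandas as pd

def approval_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest group approval rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy example: model decisions (1 = approved) by a protected attribute.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
gap = approval_rate_gap(decisions, "group", "approved")
print(f"approval-rate gap: {gap:.2f}")  # 0.33 here; a large gap warrants review
```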

In a guest article in InformationWeek [6], Scott Zoldi [7] suggests that “to eliminate bias, boards must understand and enforce auditable, immutable AI model governance based on four classic tenets of corporate governance: accountability, fairness, transparency and responsibility”.

The detail he provides is aimed more at developers of AI tools, but the principles are nonetheless relevant. This direct quote is worth citing in full:

As a concerned citizen, I applaud boards waking up, and stepping up, to recognize the danger of unconstrained use of artificial intelligence. As a data scientist, I know that board oversight and government regulation of AI is necessary. Governance, not best intentions, is what keeps companies honest.

We suggest that all boards add to their policy suite a clause that addresses AI use in the organisation. We offer a possible draft below. It may not be perfect, but boards need to start somewhere, even if only by mandating disclosure of when tools such as ChatGPT have been used.

Use of AI tools

The board has a core duty to maintain oversight of the use of AI tools within the organisation. Accordingly, the board will:

  1. ensure that all directors understand the implications of the growth in AI systems relevant to our organisation.
  2. ensure that use of such tools complies with applicable legislation and regulation.
  3. ensure that the use of such tools aligns with our values and ethical standards, and upholds consumer protection principles.
  4. ensure that robust systems are designed so that the board receives regular written and verbal reports that make clear:
    1. where AI systems are being used in the organisation
    2. that the use of such tools is transparent to staff and customers
    3. when information has been generated with the assistance of such tools, and that such information has been cross-checked.
  5. ensure that AI systems are subject to regular review of data accuracy, completeness, timeliness and consistency.
  6. ensure that AI systems do not exhibit bias towards any group.
  7. require that the risks posed by the use of such tools are included in the organisation’s risk management processes and, as such, are reported regularly to the board.

ChatGPT’s view on AI

ChatGPT was recently asked what such a policy should include; its answer is reproduced below. It is arguable whether ChatGPT is truly AI or just a very sophisticated search engine, but the results here align closely with the shorter statement above and were generated in rather less time.

[Your Organisation’s Name] AI Governance Policy

Effective Date: [Insert Date]

  1. Introduction

This AI Governance Policy outlines the principles, guidelines, and procedures that [Your Organisation’s Name] shall follow to ensure the responsible and ethical use of artificial intelligence (AI) technologies. The policy is designed to mitigate risks associated with AI, protect the rights and interests of stakeholders, and uphold our commitment to transparency and accountability.

  2. Purpose

The purpose of this policy is to:

2.1. define the principles that govern the use of AI technologies within [Your Organisation’s Name].
2.2. establish guidelines for responsible AI development, deployment and maintenance.
2.3. promote transparency and accountability in AI-related decision-making processes.
2.4. ensure compliance with applicable laws, regulations and ethical standards.

  3. Principles

3.1. Ethical use: [Your Organisation’s Name] commits to using AI technologies in a manner that upholds ethical standards, respects human rights, and avoids harm to individuals or communities.

3.2. Fairness: AI systems used by the organisation shall be designed and implemented to be fair and unbiased, avoiding discrimination and disparate impact.

3.3. Transparency: The organisation shall provide clear explanations of AI systems’ capabilities and limitations to stakeholders, as well as disclose the data sources and algorithms used.

3.4. Accountability: [Your Organisation’s Name] will establish clear lines of responsibility and accountability for AI development, deployment and maintenance.

3.5. Privacy and data protection: The organisation shall prioritise the protection of personal data and privacy when collecting, processing or storing data for AI purposes.

3.6. Security: AI systems shall be developed and maintained with robust security measures to prevent unauthorised access or malicious use.

  4. Governance structure

4.1. AI Ethics Committee: [Your Organisation’s Name] will establish an AI Ethics Committee responsible for overseeing and ensuring the adherence to AI principles. This committee will consist of experts in AI ethics, legal compliance and technology.

4.2. Data governance: The organisation shall have a dedicated Data Governance team to manage data used for AI and ensure data quality, security and compliance with relevant regulations.

  5. AI development and deployment

5.1. Due diligence: Before adopting AI technologies, [Your Organisation’s Name] shall conduct a thorough risk assessment, impact analysis and due diligence to ensure that AI systems align with the established principles.

5.2. Human oversight: AI systems shall be developed and used with human oversight, especially in sensitive decision-making processes.

5.3. Continuous monitoring: AI models and systems will undergo continuous monitoring to identify and rectify biases, errors, and potential ethical concerns.

  6. Training and awareness

6.1. Training: All personnel involved in AI development and deployment shall receive training on AI ethics, privacy and security.

6.2. Awareness: [Your Organisation’s Name] will promote awareness among employees, stakeholders, and the public about the organisation’s AI policies and practices.

  7. Compliance and reporting

7.1. Compliance: [Your Organisation’s Name] shall comply with all applicable laws and regulations related to AI.

7.2. Reporting: The AI Ethics Committee shall provide periodic reports on AI governance, including any incidents, risks or mitigations, to the executive leadership and relevant stakeholders.

  8. Review and updates

This AI Governance Policy will be reviewed annually or as needed to ensure alignment with evolving best practices, laws, and ethical standards.

By adhering to this AI Governance Policy, [Your Organisation’s Name] is committed to the responsible and ethical use of AI technologies to benefit our stakeholders and society at large.

[Authorised Signatory] [Your Organisation’s Name] [Date]



Notes

This article was researched in part using ChatGPT. Any references have been cross-checked.

  1. Willie Cheng, ‘The game-changing role of ChatGPT on boards’, The Business Times, 6/10/2023.
  2. Cunningham, L., Maskin, A. and Carlson, J., ‘The Ups and Downs of Generative AI for Boards’, Directors and Boards, 25/9/2023.
  3. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  4. https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2019/20191101e1.pdf
  5. https://corpgov.law.harvard.edu/2022/01/05/board-responsibility-for-artificial-intelligence-oversight/
  6. https://www.informationweek.com/machine-learning-ai/establish-ai-governance-not-best-intentions-to-keep-companies-honest 
  7. Scott Zoldi is Chief Analytics Officer of FICO, a predictive analytics and decision management software company.