AI in the boardroom – essential things for directors to consider

As artificial intelligence tools become embedded in corporate activities, directors may also be tempted to use them in the boardroom – for example, to summarize documents, research board issues, or prepare meeting notes. 

Yet this convenience brings risks. Directors occupy a unique fiduciary position. They hold confidential, price-sensitive, and often privileged information. Using public AI tools without safeguards could expose that information to unauthorized parties or regulatory scrutiny.

The most immediate risk comes from uploading board materials into public chatbots. These platforms, unless specifically approved and secured by the company, may store and learn from user inputs. In some systems, engineers have access to the data. In others, the information may become part of future outputs generated for different users, including competitors. 

A director who asks an AI to “summarize next week’s board pack” could therefore be sharing sensitive data with unknown parties. Beyond confidentiality breaches, this could violate contractual obligations, privacy laws, or company policies that prohibit the external sharing of non-public information. 

Even apparently innocuous AI interactions could later be exposed through legal investigations or regulatory demands. As with emails or text messages, chats with AI tools may be stored on the provider’s servers and could be retrieved under subpoena. Especially in the context of the litigious US market environment, an informal request to a chatbot for advice about a proposed merger could resurface in antitrust proceedings or shareholder litigation. Regulators could treat such material as evidence of intent or knowledge. 

Directors should therefore assume that anything entered into an AI chat could one day be read in court or used by enforcement authorities, and use these tools accordingly.

Another significant hazard involves using AI for recording or transcribing sensitive meetings. While AI transcription tools can be helpful in many operational settings, they pose considerable risks when applied to board discussions or communications with legal advisers. 

Many such tools retain audio data and transcripts on third-party servers, where they could be accessed or demanded by outsiders. The raw content of boardroom conversations, often containing strategic debate, dissent, or early-stage ideas, must remain tightly controlled. Moreover, recording or transcribing privileged discussions with lawyers through third-party services could destroy attorney-client privilege, stripping the company of vital legal protection. 

To address these risks, several proprietary platforms now offer secure, governance-grade alternatives to general-purpose AI tools. Providers such as Nasdaq, Board Intelligence, Diligent, and DiliTrust have begun embedding artificial intelligence within encrypted board portals. 

These systems are specifically designed for directors and governance professionals, and enable them to analyze board materials, generate summaries, and identify trends within a secure environment that complies with corporate confidentiality and data protection standards. 

Apart from these secure applications designed to enhance board efficiency, some companies are pushing the boundaries of what AI can do in governance. One of the most notable experiments is taking place in the United Arab Emirates. 

In 2024, Abu Dhabi’s International Holding Company (IHC) appointed a virtual AI entity called Aiden Insight as a non-voting board observer. It is intended to support the board by performing tasks such as real-time data analysis, compliance monitoring, and the generation of strategic insights. 

More recently, Al Seer Marine introduced NOVA (Neural Oversight & Virtual Automation) as a non-voting AI board observer. It is designed to support the board by modeling scenarios, providing budgetary assistance, overseeing operations, and conducting subsidiary analysis. 

Both Aiden Insight and NOVA operate at an early stage of what some experts describe as the “AI autonomy spectrum” in governance. While these systems remain at the level of supporting human directors rather than replacing them, their existence raises new questions about accountability, transparency, and fiduciary responsibility as AI capabilities continue to evolve.

Directors must also remember that AI output is not infallible. Despite their sophistication, models can “hallucinate” false information, misread context, or reproduce outdated or biased content from their training data. 

A director relying on AI to interpret financial ratios, summarize regulations, or evaluate an acquisition target risks serious error if the model’s training data predates recent developments or if its underlying reasoning is flawed. Every AI-generated output must therefore be treated as a draft requiring verification. Directors should verify sources, cross-check facts, and ensure that any analysis is based on up-to-date and reliable information.

Ultimately, AI should serve as an aid to human judgment, not a substitute for it. Chatbots and analytical tools can be valuable for brainstorming, structuring arguments, or providing a “second opinion”. But critical corporate decisions must remain under human oversight. 

For these reasons, boards would be wise to develop formal policies governing the use of AI in their work. Such frameworks should identify approved tools, define acceptable uses, require disclosure of AI-assisted analysis, and reinforce prohibitions on sharing confidential data with public systems. 

As AI adoption accelerates, boards will increasingly need to distinguish between trustworthy, enterprise-grade systems and uncontrolled public tools. Those that strike the right balance, leveraging AI’s power without compromising confidentiality or accountability, will be best positioned to lead responsibly in the digital era.

Dr. Roger Barker

Chief Research and Thought Leadership Officer, Center for Governance

 

Armando Cruz Maria

Research and Thought Leadership Assistant Director

 

Postscript

Edwin Drukarch and Eduard Fosch-Villaronga from Leiden University have developed a useful taxonomy for how directors might use AI in different circumstances. In most cases, board deliberations will fall squarely into the Layer 1 or Layer 2 categories.

Layer 1: Assistive / advisory
Role of AI: AI acts purely as an advisory or support tool (e.g., data analysis, scenario simulation).
Role of human directors: Humans remain in complete control of decisions.
Key issues / legal risks: Transparency, explainability, and ensuring no “hidden bias” in AI inputs.

Layer 2: Decision support with human oversight
Role of AI: AI proposes decisions or courses of action, but requires human validation/approval.
Role of human directors: Humans must still actively consent to, override, or refine AI output.
Key issues / legal risks: Accountability when a human accepts flawed AI advice.

Layer 3: Semi-autonomous decision-making
Role of AI: AI can make decisions within constrained domains or under defined rules, possibly subject to human veto.
Role of human directors: Human oversight must be present, possibly through ex-post review.
Key issues / legal risks: Delegation boundaries and liability for decisions made autonomously.

Layer 4: Autonomous decision-making with constraints
Role of AI: AI can operate more independently within predetermined constraints or “guardrails”.
Role of human directors: The human role shifts to specifying constraints and policy, and to monitoring.
Key issues / legal risks: How constraints are designed and enforced, and responsibility when constraints are breached.

Layer 5: Fully autonomous (with monitoring)
Role of AI: AI acts independently and makes decisions across domains, with humans mainly in a monitoring/auditing role.
Role of human directors: Human directors act as auditors and intervene when issues arise.
Key issues / legal risks: The responsibility gap, attribution of liability, legal personhood, and system governance.

Layer 6: Autonomous governance / self-governing systems
Role of AI: AI may govern itself in some respects (e.g., self-adjusting policies, self-learning).
Role of human directors: Human intervention is minimal or reactive.
Key issues / legal risks: Fundamental legal challenges: agency, legal personhood, and the question of who is responsible for harm.

Source: Drukarch, E. and Fosch-Villaronga, E. (2022), The Role and Legal Implications of Autonomy in AI-Driven Boardrooms, in F. M. Belloir (ed.) Law and Artificial Intelligence, Asser Press, The Hague, pp. 141–166.


Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the opinion or position of the Center for Governance.