The Board's AI Blind Spot: What Directors Don't Know About Agent Governance

Most boards think AI governance means ethics policies and responsible AI statements. What it actually requires is moment-of-action enforcement for autonomous agents making decisions on behalf of the organisation.

Roshan Ghadamian

What Boards Think AI Governance Is

Ask a board of directors about their AI governance and you will hear a familiar set of responses. We have a **responsible AI policy**. We have appointed a **chief AI officer** or added AI oversight to an existing committee's remit. We have published **ethical AI principles**. We require **human-in-the-loop** review for high-stakes decisions. We have conducted an **AI risk assessment**.

These are not wrong. They are **incomplete** in a way that creates significant liability exposure.

The board's understanding of AI governance is shaped by the governance tools they know: policies, oversight structures, risk assessments, and reporting frameworks. These tools were designed for a world where humans make decisions and AI provides recommendations. In that world, governing AI means governing the humans who use it — ensuring they apply appropriate judgment, follow policies, and maintain accountability.

But that world is disappearing. **AI agents** — autonomous systems that take actions on behalf of the organisation without human review of each action — are already operating in most large enterprises. They approve transactions, generate communications, modify systems, commit resources, and interact with customers and partners. Each of these actions is a decision made under the organisation's authority.

The board's AI governance framework, designed for AI-as-tool, does not address AI-as-agent. This is the blind spot, and it is growing wider every month as agent deployment accelerates.

The Agent Governance Problem

An AI agent differs from an AI tool in one critical respect: **it acts**. A tool provides output that a human evaluates and acts upon. An agent evaluates and acts itself, often without any human reviewing the specific action before it occurs.

This distinction has profound governance implications that most boards have not fully confronted.

Authority delegation. When an AI agent acts on behalf of the organisation, it exercises delegated authority. But under what delegation? Most delegation schedules were written for humans and define authority in terms of roles, reporting lines, and approval thresholds. They do not contemplate an autonomous system that makes hundreds of decisions per hour. The agent is either operating under an implicit delegation that was never formally granted, or it is operating outside the delegation framework entirely. Both are governance failures.

Constraint enforcement. Human employees are expected to know and follow organisational policies and constraints. This expectation is already imperfect for humans, but it is at least theoretically achievable — a person can read a policy and decide to follow it. An AI agent cannot "decide to follow" a policy unless that policy is encoded as a constraint the agent checks before acting. If the constraint is in a PDF on SharePoint, it does not exist for the agent.
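
To make that concrete: below is a minimal sketch, in Python, of what encoding a policy as a constraint can look like. The action shape, thresholds, and vendor list are illustrative assumptions, not any real system's schema.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A single action an agent intends to take, described as data."""
    kind: str          # e.g. "approve_payment"
    amount: float      # monetary value of the action, if any
    counterparty: str  # who the action commits the organisation to

def within_payment_policy(action: ProposedAction) -> bool:
    """A written policy ('payments over $10,000 require human review;
    pay only approved vendors') restated as an executable check."""
    approved_vendors = {"acme-logistics", "northwind-supplies"}
    if action.kind != "approve_payment":
        return True  # this constraint governs payments only
    if action.amount > 10_000:
        return False  # above threshold: not permitted without review
    return action.counterparty in approved_vendors

# The agent runtime calls the check before the action executes, not after.
action = ProposedAction("approve_payment", 4_200.0, "acme-logistics")
print(within_payment_policy(action))  # True: within policy, may proceed
```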

Decision traceability. When a human makes a decision, you can ask them why. They may not give a good answer, but the question is at least meaningful. When an AI agent makes a decision, traceability requires purpose-built infrastructure — decision traces that capture what the agent considered, what constraints it checked, what authority it operated under, and what outcome it produced. Without this infrastructure, the agent's decisions are opaque in a way that human decisions are not.

Cumulative impact. A human making decisions one at a time is unlikely to create large-scale problems without someone noticing. An AI agent making hundreds of decisions per hour can create systemic effects — contradictory commitments, threshold violations, strategic drift — that no individual decision triggers but that the aggregate produces. Governing agents requires monitoring aggregate effects, not just individual actions.

The Liability Gap

Directors have fiduciary duties — duties of care, diligence, and loyalty that require them to exercise informed judgment in the organisation's interests. These duties do not disappear when decisions are made by AI agents. If anything, they become more demanding.

A director who allows AI agents to make consequential decisions without governance infrastructure is making a governance decision — the decision to permit ungoverned delegation of organisational authority to autonomous systems. This is not meaningfully different from allowing an employee to exercise authority without any oversight, delegation framework, or accountability mechanism. No director would accept that for a human. Many are currently accepting it for AI agents.

The liability gap has several dimensions:

Informed judgment. Directors are expected to be informed about the organisation's material risks. AI agents making autonomous decisions are a material risk. If the board has not asked what governance infrastructure exists for these agents, it may not be exercising informed judgment.

Supervision. Directors are expected to establish and maintain adequate systems of supervision. If AI agents operate outside the organisation's governance framework — outside delegation schedules, outside constraint enforcement, outside decision tracing — the system of supervision has a significant gap.

Response to known risks. Once a board is aware that AI agents are making ungoverned decisions, failure to address the gap becomes a conscious choice. Regulators and courts are increasingly likely to view this as a breach of the duty of care.

The liability is not theoretical. As AI agents become more capable and more widely deployed, the probability of a governance failure with material consequences increases. When that failure occurs, the question will be: **did the board have governance infrastructure in place for its AI agents, or did it rely on ethics policies and hope?**

What Directors Should Ask

Directors do not need to understand the technical details of AI agents. They need to ask the right questions and evaluate the answers with the same rigour they apply to financial governance, risk management, and compliance.

Here are the questions that matter:

"What AI agents are currently operating in our organisation, and what decisions are they making?" If the answer is uncertain or incomplete, the board does not have visibility into a significant area of organisational activity. This is a governance gap, regardless of the technology involved.

"Under what delegated authority do these agents operate?" If the answer references the delegation schedule but the delegation schedule does not specifically address AI agents, the agents are operating under an implicit delegation that was never formally granted. This should be made explicit.

"What constraints do these agents check before acting?" If the answer is "they follow our policies," ask how. If the policies are not encoded as machine-readable constraints that the agent checks in real time, the agent is not following them. It is operating without constraint.

"Can you show me the governance trace for a specific agent decision?" If a trace exists — showing what the agent considered, what constraints it checked, what authority it operated under, and what outcome it produced — governance infrastructure is in place. If not, the agent's decisions are ungoverned.

"What happens when an agent's decision exceeds its delegated authority or conflicts with an existing constraint?" If the answer is "we review it in the next quarterly audit," the governance is retrospective and cannot prevent harm. If the answer is "the system blocks the action and escalates it," governance infrastructure is operating.

"How do we detect when agents are making contradictory decisions across the organisation?" If there is no mechanism for detecting aggregate effects, the organisation is exposed to systemic risks that no individual decision would trigger.

What Good Looks Like

Good AI agent governance does not look like a policy document. It looks like **infrastructure** — a system that operates continuously, enforces constraints at the moment of action, and produces traces that demonstrate governance was active in every agent decision.

Specifically, good agent governance includes:

Explicit delegation. Every AI agent operates under a formally defined delegation that specifies what it can do, within what parameters, with what escalation triggers. The delegation is not a policy document — it is an executable specification that the system enforces.
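
As a sketch of what an executable delegation could look like, here is a hypothetical specification in Python. The field names and values are illustrative assumptions, not Constellation's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    """An executable delegation: what one agent may do, within what
    parameters, and what forces escalation to a human."""
    agent_id: str
    permitted_actions: frozenset[str]    # e.g. {"approve_payment"}
    per_action_limit: float              # ceiling on any single action
    daily_aggregate_limit: float         # ceiling on the day's total
    escalation_triggers: frozenset[str]  # conditions that require review

INVOICE_AGENT = Delegation(
    agent_id="invoice-agent-01",
    permitted_actions=frozenset({"approve_payment", "request_invoice"}),
    per_action_limit=10_000.0,
    daily_aggregate_limit=100_000.0,
    escalation_triggers=frozenset({"new_counterparty", "duplicate_invoice"}),
)
```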

Moment-of-action enforcement. Before an agent acts, the governance system checks: is this action within the agent's delegated authority? Does it conflict with any active constraint? Does it exceed any threshold that requires human review? If the action is permitted, it proceeds with a full trace. If not, it is blocked and escalated.
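
A minimal sketch of that gate, again with hypothetical names and thresholds, makes the ordering explicit: authority first, then hard constraints, then review thresholds.

```python
from enum import Enum

class Verdict(Enum):
    PERMIT = "permit"      # proceed, with a full trace
    BLOCK = "block"        # stop the action outright
    ESCALATE = "escalate"  # hold the action for human review

def check_before_acting(action_kind: str, amount: float,
                        permitted: set[str], hard_limit: float,
                        review_threshold: float) -> Verdict:
    """The moment-of-action gate: authority first, then constraints,
    then review thresholds. Nothing executes until this returns PERMIT."""
    if action_kind not in permitted:
        return Verdict.BLOCK     # outside delegated authority
    if amount > hard_limit:
        return Verdict.BLOCK     # violates a hard constraint
    if amount > review_threshold:
        return Verdict.ESCALATE  # permitted in principle, but a human decides
    return Verdict.PERMIT

# A $7,500 payment: within authority and limits, above the review threshold.
print(check_before_acting("approve_payment", 7_500.0,
                          {"approve_payment"}, 10_000.0, 5_000.0))
# Verdict.ESCALATE
```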

Decision traces. Every agent action produces a structured, immutable record: what action was taken, under what authority, what constraints were checked, what the outcome was. These traces serve operational, legal, and audit purposes simultaneously.
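
One plausible shape for such a record, with a content hash as a simple stand-in for immutability. This is illustrative, not a specific product's trace format.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionTrace:
    """One structured record per agent action."""
    agent_id: str
    action: str                           # what was done
    authority: str                        # the delegation it acted under
    constraints_checked: tuple[str, ...]  # every constraint evaluated
    verdict: str                          # permit, block, or escalate
    timestamp: str

def seal(trace: DecisionTrace) -> str:
    """A content hash stored alongside the record makes later tampering
    detectable, which is one simple way to approximate immutability."""
    payload = json.dumps(asdict(trace), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

trace = DecisionTrace(
    agent_id="invoice-agent-01",
    action="approve_payment:4200.00:acme-logistics",
    authority="delegation/finance/2025-rev3",
    constraints_checked=("payment_policy", "vendor_allowlist"),
    verdict="permit",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(seal(trace)[:16])  # store the hash alongside the record
```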

Aggregate monitoring. Beyond individual actions, the system monitors for patterns: contradictory decisions, threshold creep, strategic drift, concentration risks. These aggregate effects are surfaced to human governance structures for review and response.
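
Threshold creep is the easiest of these patterns to illustrate. In the hypothetical sketch below, every individual payment passes its own per-action check, yet the daily aggregate exceeds a limit that no single action triggered.

```python
from collections import defaultdict

def threshold_creep(actions: list[tuple[str, float]],
                    per_action_limit: float,
                    daily_limit: float) -> list[str]:
    """Flag agents whose individual actions each passed their own check,
    but whose daily aggregate exceeds a limit no single action triggered."""
    totals: dict[str, float] = defaultdict(float)
    for agent_id, amount in actions:
        if amount <= per_action_limit:  # each action was individually fine
            totals[agent_id] += amount
    return [agent for agent, total in totals.items() if total > daily_limit]

# Thirty payments of $9,000: each clears the $10,000 per-action limit,
# but together they pass $100,000 for the day without any one check firing.
day = [("invoice-agent-01", 9_000.0)] * 30
print(threshold_creep(day, 10_000.0, 100_000.0))  # ['invoice-agent-01']
```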

Contestability. Any stakeholder can challenge an agent's decision through a formal process. The challenge is evaluated against the governance trace, and if the decision was outside bounds, remediation occurs. This is not an ethics review board — it is a structured process that operates continuously.
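
A sketch of how such a challenge might be adjudicated against the trace rather than against anyone's recollection. The structure and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    """A formal challenge to one agent decision, filed by any stakeholder."""
    trace_id: str
    relevant_constraint: str  # the rule the challenger says was breached

def adjudicate(challenge: Challenge,
               constraints_checked: tuple[str, ...]) -> str:
    """Evaluate the challenge against the recorded trace: if the cited
    constraint was never checked, the decision was outside bounds and
    remediation follows; if it was checked, the trace itself is the answer."""
    if challenge.relevant_constraint not in constraints_checked:
        return (f"remediate: {challenge.relevant_constraint} "
                f"was never checked in {challenge.trace_id}")
    return (f"dismiss: {challenge.trace_id} shows "
            f"{challenge.relevant_constraint} was enforced")

challenge = Challenge("trace-8841", "contract_ceiling")
print(adjudicate(challenge, ("payment_policy", "vendor_allowlist")))
# remediate: contract_ceiling was never checked in trace-8841
```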

This is what Constellation provides for AI agent governance. Not a policy template. Not a risk assessment framework. Infrastructure that governs agents in real time, at the moment of action, with full traceability and contestability.

Directors who understand this distinction — between governance as a document and governance as infrastructure — will be better positioned to fulfil their fiduciary duties in an era of autonomous agents. Those who do not will be relying on ethics policies to govern systems that cannot read them.

The Window Is Closing

There is a window of opportunity for boards to establish AI agent governance infrastructure before a governance failure forces the issue. That window is not long.

AI agent deployment is accelerating. The capabilities of agents are expanding rapidly — from simple automation to complex, multi-step reasoning and action. The scope of decisions being delegated to agents is growing. And regulatory attention to AI governance is intensifying, with the EU AI Act, various national frameworks, and sector-specific requirements creating expectations that boards will need to demonstrate they have met.

Boards that act now can establish governance infrastructure on their own terms, at their own pace, and shaped by their own institutional needs. Boards that wait will be forced to act reactively — likely after a governance failure has created reputational, financial, or legal consequences, and under regulatory pressure that constrains their options.

The cost of establishing agent governance infrastructure is modest relative to the risk it mitigates. The cost of not establishing it is potentially existential — not because AI agents are inherently dangerous, but because ungoverned agents are ungoverned decision-makers acting under organisational authority.

Directors who would never tolerate ungoverned human decision-making should apply the same standard to agents. The fiduciary duty is the same. The governance requirements are analogous. The only difference is that agents require governance infrastructure rather than governance policy, because agents cannot read policies.

The blind spot is not about AI. It is about governance. And boards that close it will be better governed in every dimension — not just for AI, but for the full scope of organisational decision-making that governance infrastructure enables.

See governance infrastructure in action

Constellation enforces corporate governance at the moment of action — for both humans and AI agents.