# Constellation vs AI Governance Platforms
AI governance platforms — Arthur AI, Credo AI, Holistic AI, IBM OpenPages AI Governance — solve an important problem: monitoring AI model behaviour, detecting bias, and managing AI risk. Constellation solves a different problem: governing institutional action at the moment it happens, whether that action is taken by a human or an AI agent.
## What AI governance platforms do
AI governance platforms are model-centric. They:
- Monitor ML model performance, drift, and degradation
- Detect and measure algorithmic bias across protected categories
- Assess AI risk against regulatory frameworks (EU AI Act, NIST AI RMF)
- Generate model cards and explainability reports
- Track model lineage, versioning, and deployment inventories
- Manage AI policy documents and attestation workflows
This is valuable work. If you deploy machine learning models, you need tooling like this. The question is whether this is the same problem as institutional governance.
## The scope difference
| | AI Governance Platforms | Constellation |
|---|---|---|
| Governs | AI models | Institutional action |
| Scope | Model behaviour & outputs | All consequential action (human + AI) |
| Question | Is this model performing fairly? | Is this action institutionally legitimate? |
| Enforcement | Alerts, dashboards, reports | Check / escalate / block + trace |
| Timing | Continuous monitoring (after deployment) | Moment of action (before execution) |
| Context | Model metrics & training data | Decisions, commitments, authority, precedent |
| Human loop | Data science team reviews | Escalation to institutional authority |
AI governance platforms ask: “Is this model behaving correctly?” Constellation asks: “Is this action legitimate given what the organisation has decided?”
## AI monitoring vs institutional governance
Consider a scenario: your AI agent approves a $200,000 vendor contract.
**AI governance platform** (model performance layer):

> "The model's contract scoring output is within expected confidence bounds. No bias detected across vendor demographics."

**Constellation** (institutional governance layer):

> "This exceeds the $150K threshold requiring CFO approval. Escalating. The board resolved in Q3 that vendor contracts above $100K need two signatories."
The model behaved perfectly. The action was still institutionally illegitimate. AI governance platforms would see nothing wrong. Constellation would catch it.
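The scenario above can be sketched as a pre-execution check. This is a minimal illustration, not Constellation's actual API: the rule set, field names, and verdict strings here are all hypothetical stand-ins for the institutional constraints described in the scenario.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str           # e.g. "vendor_contract"
    amount: float       # contract value in dollars
    signatories: int    # how many signatories the action currently has
    cfo_approved: bool  # whether CFO approval was obtained

def check(action: Action) -> tuple[str, list[str]]:
    """Check an action against institutional constraints at the moment
    it happens, returning a verdict and a trace of the rules applied."""
    trace: list[str] = []
    verdict = "allow"
    if action.kind == "vendor_contract":
        # Hypothetical Q3 board resolution from the scenario.
        if action.amount > 100_000 and action.signatories < 2:
            trace.append("Q3 board resolution: contracts above $100K need two signatories")
            verdict = "escalate"
        # Hypothetical spending policy from the scenario.
        if action.amount > 150_000 and not action.cfo_approved:
            trace.append("Spending policy: above $150K requires CFO approval")
            verdict = "escalate"
    return verdict, trace

# The $200K vendor contract from the scenario trips both rules:
verdict, trace = check(Action("vendor_contract", 200_000, signatories=1, cfo_approved=False))
```

The point of the sketch is that the verdict depends entirely on what the organisation has decided, not on any property of the model that produced the recommendation.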
## What AI governance platforms cannot do
AI governance platforms are designed for model oversight. They cannot:
- Evaluate whether an AI agent's action conflicts with a board resolution or institutional commitment
- Enforce spending thresholds, approval sequences, or delegation boundaries in real time
- Intercept MCP tool calls at the moment of execution and check them against institutional constraints
- Route escalations to the appropriate human authority with full decision context
- Build institutional precedent from past governance decisions
- Allow anyone in the organisation to challenge a constraint through a formal contestation process
These aren’t shortcomings. AI governance platforms weren’t built for institutional governance. They were built for model governance. The distinction matters.
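To make the interception point concrete, here is a minimal sketch of a tool-call gate that sits between an agent and execution. The function names and policy rules are hypothetical illustrations: this is not a real MCP SDK interface or Constellation's implementation, just the shape of checking an action before it runs.

```python
def governance_check(tool: str, args: dict) -> str:
    """Hypothetical policy: block contract modifications outright,
    escalate large payments, allow everything else."""
    if tool == "modify_contract":
        return "block"
    if tool == "send_payment" and args.get("amount", 0) > 150_000:
        return "escalate"
    return "allow"

def intercept(tool: str, args: dict, execute, escalate):
    """Gate a tool call at the moment of execution: run it, route it
    to a human authority, or refuse it with a traceable reason."""
    verdict = governance_check(tool, args)
    if verdict == "allow":
        return execute(tool, args)
    if verdict == "escalate":
        return escalate(tool, args)  # hand off to the appropriate human
    raise PermissionError(f"Blocked by institutional constraint: {tool}")

# A $200K payment is escalated rather than executed:
result = intercept(
    "send_payment", {"amount": 200_000},
    execute=lambda tool, args: "executed",
    escalate=lambda tool, args: "queued for CFO review",
)
```

The gate runs before the side effect, which is what distinguishes enforcement at the moment of action from the after-the-fact monitoring that model-centric platforms provide.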
## The missing layer
AI governance platforms assume that if the model is performing well, the outputs are safe to act on. But institutional legitimacy is not a model performance metric.
An AI agent can produce a perfectly unbiased, high-confidence recommendation that violates an institutional commitment, exceeds delegated authority, or contradicts an active board resolution. No amount of model monitoring will catch this because the problem is not in the model — it’s in the gap between model output and institutional action.
This is the layer Constellation occupies. It sits between AI capability and institutional consequence — ensuring that what an AI agent can do is checked against what the organisation has decided it should do.
```
// Where each layer sits

AI Model
    ↓
AI Governance (Arthur, Credo, Holistic, IBM)
    ↓
Agent Action Surface
    ↓
Institutional Governance (Constellation)
    ↓
Institutional Consequence
```
## Bottom line
AI governance platforms and Constellation are complementary. AI governance ensures models behave well. Constellation ensures institutions act legitimately — regardless of whether the action was triggered by a human or an AI agent.
- **Commercial competitor?** No
- **Category overlap?** Keyword only
- **Architectural overlap?** None
If you use AI agents that take consequential actions — approving expenditure, publishing communications, modifying contracts, triggering workflows — you need both layers. AI governance watches the model. Constellation governs the institution.
Constellation is not AI model monitoring. It’s institutional governance infrastructure — governing what happens after AI produces its output, at the moment the institution is about to act.