Governance Lab

The Governance Lab is a sandbox environment for testing governance policies before deploying them to production.

Overview

Policy changes can significantly affect how your AI agents operate. The Governance Lab lets you simulate actions against your current or draft policies, see what verdicts would be returned, and validate configurations -- all without affecting live governance.

Simulating Actions

To test a governance decision:

  1. Open the Governance Lab
  2. Define a simulated action:
    • Agent -- Select a registered agent (or use a test agent)
    • Action Type -- The action type to simulate
    • Target Service -- The downstream service
    • Confidence -- The confidence score to submit
    • Reasoning -- Optional reasoning text
  3. Click Simulate
  4. Review the result: verdict, tier assignment, policies that fired, and governance reasoning

The simulation runs the action through the full governance pipeline but does not execute the action or create a real audit record.

TIP

Use simulation to answer questions like: "If my agent submits a send_email action with confidence 0.75, what happens?" This is faster and safer than testing against the live system.
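The verdict logic the simulator exercises can be sketched locally. This is an illustrative model only -- the policy fields (`name`, `action_type`, `confidence_floor`) and verdict names are assumptions, not the platform's actual schema:

```python
# Illustrative sketch of how a confidence-floor policy yields a verdict.
# Field names and verdict strings are hypothetical, not the real schema.

def simulate(action_type, confidence, policies):
    """Return (verdict, fired) for a simulated action."""
    fired = []
    verdict = "cleared"
    for p in policies:
        if p["action_type"] != action_type:
            continue
        if confidence < p["confidence_floor"]:
            fired.append(p["name"])  # this policy fired
            verdict = "held"         # below the floor: hold for review
    return verdict, fired

policies = [{"name": "email-floor", "action_type": "send_email",
             "confidence_floor": 0.80}]

# The TIP's scenario: send_email at confidence 0.75
print(simulate("send_email", 0.75, policies))  # held, email-floor fired
print(simulate("send_email", 0.90, policies))  # cleared, nothing fired
```

Under this toy model, the 0.75 submission is held because it falls below the 0.80 floor, which is exactly the kind of question a simulation answers without touching the live system.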

Testing Draft Policies

The Governance Lab supports testing against draft policy configurations that have not been deployed yet:

  1. Create or modify a policy in draft mode
  2. Switch the Governance Lab to use the draft policy set
  3. Simulate actions to see how the draft policies would behave
  4. Compare results against the current production policies
  5. When satisfied, promote the draft policies to production
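Steps 3-5 amount to running representative actions against the draft set and promoting only when the verdicts match expectations. A minimal sketch, assuming a draft policy set is just a map of action types to confidence floors (an illustrative shape, not the real configuration format):

```python
# Hypothetical sketch: gate promotion of a draft policy set on a
# suite of representative simulated actions.

def evaluate(action, policy_set):
    """Return the verdict for one action under one policy set."""
    floor = policy_set.get(action["type"], 0.0)
    return "held" if action["confidence"] < floor else "cleared"

draft = {"send_email": 0.80, "delete_record": 0.95}  # draft floors

# Representative actions with the verdict we expect the draft to produce
cases = [
    {"type": "send_email", "confidence": 0.85, "expect": "cleared"},
    {"type": "delete_record", "confidence": 0.90, "expect": "held"},
]

ok = all(evaluate(c, draft) == c["expect"] for c in cases)
print("promote draft" if ok else "keep iterating")
```

Keeping the expected verdicts alongside the test actions turns the promotion decision into a repeatable check rather than a one-off manual review.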

A/B Testing Policy Changes

Compare how the same action would be governed under two different policy configurations:

  1. Configure Policy Set A (e.g., current production policies)
  2. Configure Policy Set B (e.g., proposed changes)
  3. Simulate the same action against both sets
  4. Review the side-by-side comparison of verdicts, tier assignments, and reasoning

This is particularly useful when adjusting confidence floors, changing tier mappings, or adding new restriction policies.
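The side-by-side comparison can be sketched as evaluating one action under both sets and diffing the verdicts. As above, the policy-set shape here is an assumption for illustration:

```python
# Hypothetical A/B sketch: same action, two policy sets, compare verdicts.

def evaluate(action_type, confidence, floors):
    floor = floors.get(action_type, 0.0)
    return "held" if confidence < floor else "cleared"

set_a = {"send_email": 0.70}  # e.g., current production floor
set_b = {"send_email": 0.85}  # e.g., proposed stricter floor

action_type, confidence = "send_email", 0.75
for label, floors in [("A", set_a), ("B", set_b)]:
    print(label, evaluate(action_type, confidence, floors))
# Set A clears the action; set B would newly hold it -- the diff
# shows exactly which submissions the proposed change affects.
```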

Validating Policy Configurations

The lab includes a configuration validator that checks policy JSON for:

  • Correct structure and required fields
  • Valid values for known configuration options
  • Conflicts between policies (e.g., two policies targeting the same action type with contradictory verdicts)
  • Missing dependencies (e.g., a tier mapping that references a nonexistent policy)

WARNING

Validation catches structural issues but cannot predict all behavioral outcomes. Always run simulations with realistic scenarios after validation passes.
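The kinds of checks the validator performs -- required fields and cross-policy conflicts -- can be sketched as follows. The field names and the returned messages are illustrative assumptions, not the validator's real output:

```python
# Illustrative validator sketch: structural checks plus a simple
# conflict check. Field names are hypothetical, not the real schema.

def validate(policies):
    """Return a list of human-readable problems (empty means valid)."""
    problems = []
    required = {"name", "action_type", "verdict"}
    seen = {}  # action_type -> verdict already claimed by a policy
    for p in policies:
        missing = required - p.keys()
        if missing:  # structural check: required fields present
            problems.append(f"{p.get('name', '?')}: missing {sorted(missing)}")
            continue
        key = p["action_type"]
        if key in seen and seen[key] != p["verdict"]:
            # conflict check: two policies, same target, contradictory verdicts
            problems.append(f"conflict on {key}: {seen[key]} vs {p['verdict']}")
        seen[key] = p["verdict"]
    return problems

policies = [
    {"name": "a", "action_type": "send_email", "verdict": "hold"},
    {"name": "b", "action_type": "send_email", "verdict": "clear"},
    {"name": "c", "action_type": "export"},  # missing "verdict"
]
for problem in validate(policies):
    print(problem)
```

As the WARNING notes, passing checks like these says nothing about behavior -- follow a clean validation run with realistic simulations.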

Use Cases

  • Adding a new confidence floor policy -- Simulate actions at various confidence levels to see which would be held
  • Changing a tier mapping from B to A -- Test that the action is correctly cleared without unintended side effects
  • Onboarding a new agent -- Simulate the agent's expected action types to verify policies behave as intended
  • Compliance pack evaluation -- Apply a draft compliance pack and test representative actions