Governance Lab
The Governance Lab is a sandbox environment for testing governance policies before deploying them to production.
Overview
Policy changes can have a significant impact on how your AI agents operate. The Governance Lab lets you simulate actions against your current or draft policies, see which verdicts would be returned, and validate configurations -- all without affecting live governance.
Simulating Actions
To test a governance decision:
1. Open the Governance Lab
2. Define a simulated action:
   - Agent -- Select a registered agent (or use a test agent)
   - Action Type -- The action type to simulate
   - Target Service -- The downstream service
   - Confidence -- The confidence score to submit
   - Reasoning -- Optional reasoning text
3. Click Simulate
4. Review the result: verdict, tier assignment, policies that fired, and governance reasoning
The simulation runs the action through the full governance pipeline but does not execute the action or create a real audit record.
TIP
Use simulation to answer questions like: "If my agent submits a send_email action with confidence 0.75, what happens?" This is faster and safer than testing against the live system.
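The TIP's question can be sketched in code. Everything below is a hypothetical illustration: the function name, the `HOLD`/`CLEAR` verdict strings, and the 0.80 confidence floor are assumptions, not the platform's actual API or policy values.

```python
# Hypothetical sketch: names, verdicts, and thresholds are illustrative
# assumptions, not the platform's real API.

def simulate_action(action_type, confidence, confidence_floors):
    """Return the verdict a confidence-floor policy would produce."""
    floor = confidence_floors.get(action_type)
    if floor is not None and confidence < floor:
        return "HOLD"   # below the floor: the action would be held
    return "CLEAR"      # no floor applies, or confidence meets it

# "If my agent submits a send_email action with confidence 0.75, what happens?"
floors = {"send_email": 0.80}  # assumed draft policy configuration
print(simulate_action("send_email", 0.75, floors))  # HOLD: 0.75 < 0.80
```

Because nothing here touches the live system, you can probe the boundary freely -- for example, re-running the same call at 0.80 or 0.85 to find exactly where the verdict flips.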
Testing Draft Policies
The Governance Lab supports testing against draft policy configurations that have not yet been deployed:
1. Create or modify a policy in draft mode
2. Switch the Governance Lab to use the draft policy set
3. Simulate actions to see how the draft policies would behave
4. Compare results against the current production policies
5. When satisfied, promote the draft policies to production
A/B Testing Policy Changes
Compare how the same action would be governed under two different policy configurations:
1. Configure Policy Set A (e.g., current production policies)
2. Configure Policy Set B (e.g., proposed changes)
3. Simulate the same action against both sets
4. Review the side-by-side comparison of verdicts, tier assignments, and reasoning
This is particularly useful when adjusting confidence floors, changing tier mappings, or adding new restriction policies.
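The side-by-side comparison can be sketched as follows. The policy-set structure (a `confidence_floors` map) and the verdict strings are assumptions chosen for illustration; the real schema may differ.

```python
# Hypothetical sketch: policy sets as plain dicts; the field names are
# illustrative assumptions, not the platform's schema.

def verdict(action_type, confidence, policy_set):
    """Verdict under a single policy set's confidence floors."""
    floor = policy_set.get("confidence_floors", {}).get(action_type)
    return "HOLD" if floor is not None and confidence < floor else "CLEAR"

def compare(action_type, confidence, set_a, set_b):
    """Run the same simulated action against both policy sets."""
    return {
        "A": verdict(action_type, confidence, set_a),
        "B": verdict(action_type, confidence, set_b),
    }

prod = {"confidence_floors": {"send_email": 0.70}}   # Policy Set A
draft = {"confidence_floors": {"send_email": 0.85}}  # Policy Set B
print(compare("send_email", 0.75, prod, draft))
# {'A': 'CLEAR', 'B': 'HOLD'} -- the draft raises the floor above 0.75
```

Running the comparison across a range of confidence values shows exactly which actions the proposed change would start holding.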
Validating Policy Configurations
The lab includes a configuration validator that checks policy JSON for:
- Correct structure and required fields
- Valid values for known configuration options
- Conflicts between policies (e.g., two policies targeting the same action type with contradictory verdicts)
- Missing dependencies (e.g., a tier mapping that references a nonexistent policy)
WARNING
Validation catches structural issues but cannot predict all behavioral outcomes. Always run simulations with realistic scenarios after validation passes.
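The kinds of checks the validator performs can be sketched as below. The policy schema here (a `name`/`action_type`/`verdict` triple) and the conflict rule are assumptions for illustration only, not the platform's real policy format.

```python
import json

# Hypothetical sketch: the policy schema (required fields, verdict
# values) is an assumption for illustration, not the real format.

REQUIRED = {"name", "action_type", "verdict"}

def validate(policies_json):
    """Return a list of structural problems found in a policy list."""
    problems, seen = [], {}
    for p in json.loads(policies_json):
        missing = REQUIRED - p.keys()
        if missing:
            problems.append(
                f"{p.get('name', '<unnamed>')}: missing {sorted(missing)}")
            continue
        # Conflict check: two policies on one action type, different verdicts
        prior = seen.setdefault(p["action_type"], p["verdict"])
        if prior != p["verdict"]:
            problems.append(
                f"conflict on {p['action_type']}: {prior} vs {p['verdict']}")
    return problems

doc = '''[
  {"name": "floor-email", "action_type": "send_email", "verdict": "HOLD"},
  {"name": "allow-email", "action_type": "send_email", "verdict": "CLEAR"}
]'''
print(validate(doc))  # ['conflict on send_email: HOLD vs CLEAR']
```

As the warning above notes, an empty problem list only means the configuration is structurally sound -- behavioral checks still require simulation.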
Use Cases
| Scenario | How the Lab Helps |
|---|---|
| Adding a new confidence floor policy | Simulate actions at various confidence levels to see which would be held |
| Changing a tier mapping from B to A | Test that the action is correctly cleared without unintended side effects |
| Onboarding a new agent | Simulate the agent's expected action types to verify policies behave as intended |
| Compliance pack evaluation | Apply a draft compliance pack and test representative actions |
Related Features
- Governance Policies -- Create and manage the policies you test in the lab
- Governance Replay -- Replay historical actions against current or draft policies
- Threat Simulation -- Test governance against adversarial attack scenarios