
L6: Model Governance

L6 is the model supply chain governance layer. It controls which AI models your organization can use, how they are accessed, what data they are allowed to process, and how much they cost. L6 treats AI models the same way mature organizations treat software vendors -- with a registry, access policies, agreement tracking, and spend controls.

Why Model Governance?

Most organizations today have no idea how many AI models are being used across their teams. Marketing uses one provider, engineering uses another, and the analytics team just signed up for a third. Each model has different data handling terms, different security postures, and different cost structures.

L6 brings order to this by providing a single source of truth for your model supply chain.

Model Registry

The Model Registry is the authoritative list of AI models known to your organization. Every model -- whether approved, blocked, deprecated, or pending review -- has a registry entry.

Registry Fields

| Field | Description | Example |
| --- | --- | --- |
| Model ID | Unique identifier | openai/gpt-4o |
| Provider | Model provider or vendor | OpenAI |
| Version | Specific model version | 2024-08-06 |
| Status | Governance status | APPROVED, BLOCKED, DEPRECATED, PENDING |
| Access Channel | How the model is accessed | API Direct, Azure OpenAI, AWS Bedrock |
| DPA Date | Data Processing Agreement effective date | 2025-11-15 |
| BAA Date | Business Associate Agreement effective date | 2025-11-15 |
| Cost per Token | Input and output token pricing | $2.50 / $10.00 per 1M tokens |
| Data Classification | Maximum data classification allowed | CONFIDENTIAL |
| Tags | Organizational tags | production, healthcare-approved |

Model Statuses

APPROVED    -- Model is cleared for use under active policies
BLOCKED     -- Model is explicitly prohibited
DEPRECATED  -- Model was previously approved but is being phased out
PENDING     -- Model is under review; not yet approved or blocked

BLOCKED vs. PENDING

A BLOCKED model has been reviewed and explicitly denied. A PENDING model has not yet been reviewed. Both are unavailable for use, but the distinction matters for audit purposes.

MAP Policies (Model Access Policies)

MAP Policies are the rules that govern how models are accessed and used. There are five policy types, each addressing a different governance dimension.

1. MODEL_ALLOWLIST

Controls which models are approved for use. Any model not on the allowlist is denied by default.

```json
{
  "policy_type": "MODEL_ALLOWLIST",
  "name": "Production Approved Models",
  "models": [
    "openai/gpt-4o",
    "openai/gpt-4o-mini",
    "anthropic/claude-sonnet-4-20250514",
    "google/gemini-1.5-pro"
  ],
  "enforcement": "BLOCK_UNLISTED"
}
```
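The evaluation logic can be sketched in a few lines. This is an illustrative Python sketch, not the actual L6 implementation; the `evaluate_allowlist` function and the `PENDING` fallback for non-blocking enforcement modes are assumptions.

```python
# Hypothetical sketch of MODEL_ALLOWLIST evaluation. Field names mirror
# the policy JSON above; the function itself is illustrative.
def evaluate_allowlist(policy: dict, model_id: str) -> str:
    """Return a verdict string for model_id under an allowlist policy."""
    if model_id in policy["models"]:
        return "APPROVED"
    # BLOCK_UNLISTED denies any model not explicitly listed.
    if policy.get("enforcement") == "BLOCK_UNLISTED":
        return "BLOCKED"
    return "PENDING"  # assumption: softer modes route to review

policy = {
    "policy_type": "MODEL_ALLOWLIST",
    "models": ["openai/gpt-4o", "anthropic/claude-sonnet-4-20250514"],
    "enforcement": "BLOCK_UNLISTED",
}
print(evaluate_allowlist(policy, "openai/gpt-4o"))  # APPROVED
print(evaluate_allowlist(policy, "mistral/large"))  # BLOCKED
```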

2. CHANNEL_ENFORCEMENT

Requires that models are accessed through specific, approved channels -- not direct API keys floating in code.

```json
{
  "policy_type": "CHANNEL_ENFORCEMENT",
  "name": "Azure-Only for OpenAI Models",
  "rules": [
    {
      "provider": "OpenAI",
      "required_channel": "Azure OpenAI",
      "reason": "Enterprise agreement requires Azure deployment for data residency"
    }
  ]
}
```
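A rule like this reduces to a simple provider-to-channel match. The sketch below is illustrative; `channel_ok` and the permissive default for providers without a rule are assumptions, not documented L6 behavior.

```python
# Illustrative CHANNEL_ENFORCEMENT check: a request is compliant only if
# the channel actually used matches the provider's required channel.
def channel_ok(rules: list, provider: str, channel_used: str) -> bool:
    for rule in rules:
        if rule["provider"] == provider:
            return channel_used == rule["required_channel"]
    return True  # assumption: providers without a rule are unconstrained

rules = [{"provider": "OpenAI", "required_channel": "Azure OpenAI"}]
print(channel_ok(rules, "OpenAI", "Azure OpenAI"))  # True
print(channel_ok(rules, "OpenAI", "API Direct"))    # False
```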

Why Channel Enforcement?

Direct API access to a model provider may mean your data is processed under consumer terms of service. Enterprise channels (Azure OpenAI, AWS Bedrock, Google Vertex) typically offer stronger data handling agreements, regional deployment, and audit capabilities.

3. AGREEMENT_REQUIRED

Requires valid legal agreements (DPA, BAA) before a model can be used. If agreements are expired or missing, the model is blocked.

```json
{
  "policy_type": "AGREEMENT_REQUIRED",
  "name": "Require Active DPA",
  "requirements": [
    {
      "agreement_type": "DPA",
      "status": "ACTIVE",
      "max_age_days": 365
    },
    {
      "agreement_type": "BAA",
      "status": "ACTIVE",
      "applies_to": "models handling PHI"
    }
  ]
}
```
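The `max_age_days` requirement amounts to a date-arithmetic check against the registry's DPA Date field. A minimal sketch, assuming the agreement's effective date is what ages out (the `dpa_is_valid` helper is hypothetical):

```python
from datetime import date

# Hypothetical AGREEMENT_REQUIRED check: a DPA satisfies the policy only
# if its effective date is no more than max_age_days in the past.
def dpa_is_valid(effective: date, today: date, max_age_days: int = 365) -> bool:
    age = (today - effective).days
    return 0 <= age <= max_age_days

print(dpa_is_valid(date(2025, 11, 15), date(2026, 4, 10)))  # True: ~146 days old
print(dpa_is_valid(date(2024, 11, 15), date(2026, 4, 10)))  # False: past 365 days
```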

4. DATA_CLASS_MODEL_MAP

Restricts which models can handle which data classifications. Prevents sensitive data from being sent to models without appropriate security controls.

```json
{
  "policy_type": "DATA_CLASS_MODEL_MAP",
  "name": "Data Classification Routing",
  "mappings": [
    {
      "data_class": "PUBLIC",
      "allowed_models": ["*"]
    },
    {
      "data_class": "INTERNAL",
      "allowed_models": ["openai/gpt-4o", "anthropic/claude-sonnet-4-20250514"]
    },
    {
      "data_class": "CONFIDENTIAL",
      "allowed_models": ["openai/gpt-4o"],
      "required_channel": "Azure OpenAI"
    },
    {
      "data_class": "PII",
      "allowed_models": ["openai/gpt-4o"],
      "required_channel": "Azure OpenAI",
      "required_agreements": ["DPA"]
    },
    {
      "data_class": "PHI",
      "allowed_models": ["openai/gpt-4o"],
      "required_channel": "Azure OpenAI",
      "required_agreements": ["DPA", "BAA"]
    },
    {
      "data_class": "PCI",
      "allowed_models": [],
      "note": "No models approved for PCI data processing"
    }
  ]
}
```
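Resolving a mapping is a lookup by data class followed by a wildcard-or-membership test; an empty list (as for PCI) denies everything. This is an illustrative sketch, and the deny-by-default behavior for unmapped classifications is an assumption:

```python
# Illustrative DATA_CLASS_MODEL_MAP resolution; mirrors the mappings
# above but is not the actual L6 implementation.
def allowed_for(mappings: list, data_class: str, model_id: str) -> bool:
    for m in mappings:
        if m["data_class"] == data_class:
            allowed = m["allowed_models"]
            # "*" permits any model; an empty list permits none.
            return "*" in allowed or model_id in allowed
    return False  # assumption: unmapped classifications are denied

mappings = [
    {"data_class": "PUBLIC", "allowed_models": ["*"]},
    {"data_class": "CONFIDENTIAL", "allowed_models": ["openai/gpt-4o"]},
    {"data_class": "PCI", "allowed_models": []},
]
print(allowed_for(mappings, "PUBLIC", "any/model"))                    # True
print(allowed_for(mappings, "CONFIDENTIAL", "google/gemini-1.5-pro"))  # False
print(allowed_for(mappings, "PCI", "openai/gpt-4o"))                   # False
```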

5. COST_GOVERNANCE

Controls AI spending with budgets, alerts, and hard limits.

```json
{
  "policy_type": "COST_GOVERNANCE",
  "name": "Monthly Spending Controls",
  "budgets": [
    {
      "scope": "organization",
      "monthly_limit": 50000,
      "alert_at": [0.50, 0.75, 0.90],
      "action_at_limit": "BLOCK"
    },
    {
      "scope": "department:engineering",
      "monthly_limit": 20000,
      "alert_at": [0.75, 0.90],
      "action_at_limit": "HOLD"
    },
    {
      "scope": "user",
      "daily_limit": 100,
      "action_at_limit": "BLOCK"
    }
  ]
}
```
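The `alert_at` fractions are thresholds on budget utilization. A minimal sketch of how one budget entry might be evaluated (the `budget_status` helper and its return shape are illustrative assumptions):

```python
# Hypothetical COST_GOVERNANCE threshold check: which alert levels have
# been crossed, and whether the hard limit is hit.
def budget_status(spend: float, limit: float, alert_at: list) -> dict:
    utilization = spend / limit
    return {
        "utilization": round(utilization, 2),
        "alerts_triggered": [a for a in alert_at if utilization >= a],
        "at_limit": spend >= limit,  # triggers action_at_limit (BLOCK/HOLD)
    }

print(budget_status(42_500, 50_000, [0.50, 0.75, 0.90]))
# {'utilization': 0.85, 'alerts_triggered': [0.5, 0.75], 'at_limit': False}
```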

Shadow Detection

L6 shadow detection identifies governance policy violations:

| Detection Type | What It Catches |
| --- | --- |
| Unapproved Model | A model not in the registry or not in APPROVED status |
| Channel Bypass | An approved model accessed through an unapproved channel |
| Agreement Gap | A model used without required legal agreements |
| Data Class Violation | Sensitive data sent to a model not cleared for that classification |
| Budget Exceeded | Usage that exceeds cost governance limits |

Shadow events are logged with full context -- the user, the model, the data classification, and the policy that was violated.

Shadow Events Demand Attention

Shadow events indicate either a policy gap (the user needed a model that is not yet approved) or a security concern (the user is intentionally bypassing governance). Both require investigation.

Audit Trail

L6 maintains its own hash-chained audit records, separate from but integrated with the global audit trail. Every governance decision is recorded:

```
L6 Audit Record:
  Timestamp:    2026-04-10T14:22:03Z
  Action:       model.inference
  User:         eng-team/alice
  Model:        openai/gpt-4o
  Channel:      Azure OpenAI
  Data Class:   INTERNAL
  Policies:     MODEL_ALLOWLIST(PASS), CHANNEL_ENFORCEMENT(PASS),
                AGREEMENT_REQUIRED(PASS), DATA_CLASS_MODEL_MAP(PASS),
                COST_GOVERNANCE(PASS)
  Verdict:      APPROVED
  Hash:         0x7f2a...
  Prev Hash:    0x3e91...
```
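Hash chaining means each record's Prev Hash must equal the hash of the record before it, so any tampering or deletion is detectable. The sketch below illustrates the principle only; L6's actual record serialization and hash format are not specified here, and both helper functions are assumptions.

```python
import hashlib

# Sketch of hash-chain verification. Assumption: each record stores the
# previous record's hash, and a record's hash covers its serialized fields.
def record_hash(record: dict) -> str:
    serialized = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(serialized.encode()).hexdigest()

def verify_chain(records: list) -> bool:
    for prev, curr in zip(records, records[1:]):
        if curr["prev_hash"] != record_hash(prev):
            return False  # a tampered or missing record breaks the chain
    return True

r1 = {"action": "model.inference", "prev_hash": "genesis"}
r2 = {"action": "model.inference", "prev_hash": record_hash(r1)}
print(verify_chain([r1, r2]))  # True
```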

Operating Modes

| Mode | Verdict Behavior | Use Case |
| --- | --- | --- |
| VISIBILITY | All actions logged, no enforcement. Verdicts are always APPROVED. | Discovery phase -- understand your model landscape before setting policies. |
| ADVISORY | Policy violations produce HELD verdicts. Actions are paused for human review. | Policy tuning -- validate that policies are correct before hard enforcement. |
| ENFORCEMENT | Policy violations produce BLOCKED verdicts. Actions are denied immediately. | Production governance -- unauthorized model access is prevented. |

Platform Connectors

L6 integrates with major AI platforms to monitor and enforce model access:

| Platform | Capabilities |
| --- | --- |
| Azure OpenAI | Deployment inventory sync, usage telemetry, content filter status, regional deployment tracking |
| AWS Bedrock | Model access audit, invocation logging, VPC endpoint verification, cross-region detection |
| Google Vertex AI | Model Garden access tracking, endpoint monitoring, IAM integration, data residency verification |
| OpenAI Direct | API key inventory, usage monitoring, organization-level spend tracking, model access detection |

Connectors are configured in the Settings tab with API credentials and sync frequency.

Console Tabs

The L6 console has six tabs:

1. Dashboard

High-level overview of your model governance posture:

  • Registered Models -- total count and breakdown by status (APPROVED / BLOCKED / DEPRECATED / PENDING)
  • Active Policies -- number of MAP policies in effect
  • Agreement Coverage -- percentage of active models with valid DPA/BAA
  • Shadow Events -- count of policy violations in the current period
  • Top Models by Usage -- ranked by token volume or request count
  • Cost Tracking -- current spend vs. budget, burn rate, projected month-end

2. Model Registry

Full CRUD interface for model entries. Add new models, update statuses, record agreement dates, set data classification limits. Bulk import via CSV.

3. Policies

Create, edit, and manage MAP policies. Each policy shows its type, scope, current enforcement mode, and hit count. Test policies against historical data before activating.

4. Usage & Cost

Detailed usage analytics:

  • Token consumption by model, department, and user
  • Cost breakdown with trend charts
  • Budget utilization gauges
  • Anomaly detection for unusual spending patterns

5. Shadow Alerts

List of shadow detection events with severity, timestamp, user, model, and violated policy. Each alert can be:

  • Acknowledged -- investigated and documented
  • Resolved -- model added to registry or user retrained
  • Escalated -- forwarded to security or compliance team

6. Settings

  • Platform connector configuration
  • Sync frequency and credential management
  • Default policies for new models
  • Notification preferences
  • Operating mode selection (VISIBILITY / ADVISORY / ENFORCEMENT)

Integration with /govern

L6 participates in the governance pipeline via the metadata.model_id field:

```
POST /govern
{
  "action": "model.inference",
  "metadata": {
    "model_id": "openai/gpt-4o",
    "channel": "azure_openai",
    "data_classification": "CONFIDENTIAL",
    "department": "engineering",
    "user_id": "eng-team/alice"
  },
  "payload": {
    "prompt": "..."
  }
}
```

L6 evaluates the model_id against all active MAP policies and returns its verdict. This verdict is combined with other active layers (L1, L2, L4, etc.) to produce the final governance decision.
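A client-side call to this endpoint can be sketched with the standard library alone. The host below is a placeholder, not a real L6 endpoint, and no authentication header is shown:

```python
import json
from urllib.request import Request

# Illustrative /govern request; the host is a placeholder (assumption).
payload = {
    "action": "model.inference",
    "metadata": {
        "model_id": "openai/gpt-4o",
        "channel": "azure_openai",
        "data_classification": "CONFIDENTIAL",
    },
    "payload": {"prompt": "..."},
}
req = Request(
    "https://governance.example.com/govern",  # placeholder endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# from urllib.request import urlopen
# response = urlopen(req)  # uncomment against a real deployment
print(req.get_method())  # POST (urllib defaults to POST when data is set)
```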
