L7: Shadow AI Detection
L7 discovers unauthorized AI usage across your organization. Shadow AI is any AI tool, model, or service used without organizational awareness, approval, or governance. L7 finds it, classifies the risk, and recommends governance actions.
What Is Shadow AI?
Shadow AI is the AI equivalent of shadow IT -- tools adopted by individuals or teams without going through procurement, security review, or governance approval. Examples include:
- An employee pasting customer data into ChatGPT in a browser tab
- A developer using a personal OpenAI API key in production code
- A marketing team signing up for an AI copywriting tool with a corporate credit card
- A contractor using an AI code assistant that sends code to an external model
- An analyst uploading a spreadsheet to an AI data analysis platform
The Scope of the Problem
Industry surveys consistently show that 60-80% of AI tools used in enterprises are unmanaged. These tools process sensitive data under consumer terms of service, with no audit trail, no data handling agreements, and no organizational visibility.
Capabilities
Network Scanning for AI API Calls
L7 monitors network traffic for outbound connections to known AI service endpoints:
| Provider | Detected Endpoints |
|---|---|
| OpenAI | api.openai.com, chat.openai.com |
| Anthropic | api.anthropic.com, claude.ai |
| Google | generativelanguage.googleapis.com, gemini.google.com |
| Mistral | api.mistral.ai, chat.mistral.ai |
| Cohere | api.cohere.ai |
| Hugging Face | api-inference.huggingface.co |
| Replicate | api.replicate.com |
| Custom/Self-hosted | Configurable endpoint patterns |
Traffic analysis identifies:
- Which AI services are being contacted
- Volume and frequency of requests
- Data payload sizes (indicating potential sensitive data transfer)
- Source users and departments
Browser Extension Monitoring
TheWARDN's browser extension (Chrome and Edge, Manifest V3) provides visibility into browser-based AI usage:
- Detects navigation to AI web applications
- Monitors clipboard activity when AI tools are in focus
- Identifies browser extensions that interact with AI services
- Tracks copy/paste of sensitive data into AI interfaces
Shadow AI Detection:
Source: Browser (Chrome)
User: marketing/sarah
Tool: jasper.ai
Activity: Text input (paste from clipboard)
Data Size: 4,200 characters
Risk: MEDIUM -- tool not in approved registry
Recommended Action: Review jasper.ai for governance onboarding
Browser Extension Is Opt-In Visibility
The browser extension provides visibility without blocking. It reports what AI tools are being used so governance teams can make informed decisions about which tools to approve, restrict, or replace with governed alternatives.
Desktop Agent Discovery
L7 discovers AI-powered desktop applications and agents running on managed endpoints:
- AI code assistants (GitHub Copilot, Cursor, Cody, Continue)
- AI writing tools (Grammarly AI, Notion AI, Otter.ai)
- AI image generators (Midjourney, DALL-E desktop clients)
- Local LLM runners (Ollama, LM Studio, llama.cpp)
- Custom AI agents and automation tools
Discovery is performed through:
- Process enumeration on managed devices
- Application inventory integration (SCCM, Intune, Jamf)
- Network connection correlation with known AI endpoints
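Process-enumeration matching can be illustrated with a small sketch. The process-name list and categories below are hypothetical examples, not L7's actual signature database; substring matching is deliberately loose and would need allowlisting in practice (e.g. "cursor" also appears in unrelated process names).

```python
# Hypothetical signatures: lowercase substrings mapped to tool categories.
KNOWN_AI_PROCESSES = {
    "ollama": "Local LLM runner",
    "lm studio": "Local LLM runner",
    "cursor": "AI code assistant",
    "github copilot": "AI code assistant",
    "otter": "AI writing tool",
}

def discover_ai_processes(process_names: list[str]) -> list[dict]:
    """Flag processes whose names match a known AI tool signature."""
    hits = []
    for name in process_names:
        lowered = name.lower()
        for needle, category in KNOWN_AI_PROCESSES.items():
            if needle in lowered:
                hits.append({"process": name, "category": category})
                break  # one match per process is enough
    return hits
```

A real deployment would correlate these hits with application inventory (SCCM, Intune, Jamf) and network connections rather than relying on names alone.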
Unauthorized Tool Detection
L7 maintains a continuously updated database of AI tools and services. When a new tool is detected, it is classified against the organization's governance policies:
| Classification | Meaning | Action |
|---|---|---|
| Approved | Tool is in the governance registry with active policies | No action needed |
| Known - Unapproved | Tool is recognized but not yet approved | Flag for governance review |
| Unknown | Tool is not in the database | Flag as high priority for investigation |
| Blocked | Tool has been explicitly prohibited | Alert security team |
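The classification table above maps directly to a lookup, sketched below. The registry and blocklist shapes are assumptions for illustration; the classification labels and actions come from the table.

```python
def classify_tool(tool: str, registry: dict[str, str], blocklist: set[str]) -> tuple[str, str]:
    """Classify a detected tool against governance policy; returns (classification, action)."""
    if tool in blocklist:
        return ("Blocked", "Alert security team")
    status = registry.get(tool)  # e.g. "approved" or "known"
    if status == "approved":
        return ("Approved", "No action needed")
    if status == "known":
        return ("Known - Unapproved", "Flag for governance review")
    return ("Unknown", "Flag as high priority for investigation")
```

Note the ordering: an explicit block wins over any registry status, and anything unrecognized defaults to the highest-scrutiny path.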
Risk Classification
When L7 detects shadow AI usage, it assigns a risk level based on multiple factors:
Risk Assessment: shadow_event_20260410_0847
Tool: copy.ai
Data Sensitivity: Unable to determine (no DLP integration)
Agreement Status: No DPA on file
User Count: 3 users detected
Data Volume: ~12,000 tokens/day
Regulatory Impact: MEDIUM (marketing content, no PII detected)
Overall Risk: MEDIUM
Recommendations:
1. Contact copy.ai for enterprise agreement and DPA
2. Evaluate as approved tool under MODEL_ALLOWLIST
3. If approved, configure L6 MAP policies for ongoing governance
4. If denied, notify affected users and provide an approved alternative
Risk factors include:
- Data sensitivity -- is sensitive data being sent to the tool?
- Agreement status -- does the organization have a DPA/BAA with the provider?
- User count -- how widespread is adoption?
- Data volume -- how much data is being processed?
- Regulatory exposure -- does usage implicate specific regulations?
- Tool reputation -- is the tool from a known, reputable provider?
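One way to combine these factors is a simple average over per-factor scores. The 0-3 scale, equal weighting, and band thresholds below are illustrative assumptions; L7's actual scoring model is not documented here.

```python
def risk_level(factors: dict[str, int]) -> str:
    """Average per-factor scores (0 = benign, 3 = severe) into an overall risk band."""
    score = sum(factors.values()) / len(factors)
    if score >= 2.5:
        return "CRITICAL"
    if score >= 1.5:
        return "HIGH"
    if score >= 0.75:
        return "MEDIUM"
    return "LOW"

# Roughly mirrors the copy.ai assessment above: no DPA is the main concern,
# adoption is small, and no PII was detected.
example = {
    "data_sensitivity": 1,   # unable to determine
    "agreement_status": 2,   # no DPA on file
    "user_count": 0,         # 3 users
    "data_volume": 1,        # ~12,000 tokens/day
    "regulatory_exposure": 1,
    "tool_reputation": 0,
}
```

A production scorer would likely weight factors unequally (e.g. confirmed PII exfiltration should dominate) rather than averaging.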
Console Features
Shadow AI Dashboard
High-level view of unauthorized AI activity:
- Total shadow tools detected -- unique tools found outside governance
- Active shadow users -- number of users with shadow AI activity
- Risk distribution -- breakdown by risk level (Critical / High / Medium / Low)
- Trend chart -- shadow AI activity over time (ideally trending down as tools are governed)
- New detections -- tools detected for the first time in the current period
Detection Feed
Real-time feed of shadow AI events:
- Tool name and provider
- User and department
- Activity type (API call, web app, desktop agent)
- Data volume estimate
- Risk classification
- Recommended action
Tool Inventory
Comprehensive list of all AI tools detected across the organization, whether approved or not:
- Tool name, provider, category
- First detected date
- User count
- Data volume estimate
- Governance status (Approved / Under Review / Blocked / Unknown)
- Link to L6 registry entry (if approved)
Remediation Tracker
Track the governance onboarding process for discovered shadow AI tools:
- Investigate -- assess the tool and its usage
- Decide -- approve, block, or replace with a governed alternative
- Implement -- configure L6 policies, notify users, update documentation
- Verify -- confirm shadow usage has migrated to the governed path
How L7 Connects to L6
L7 and L6 form a closed loop:
- L7 discovers an unauthorized AI tool or model
- The governance team evaluates the tool
- If approved, the tool is added to the L6 Model Registry with appropriate MAP policies
- L6 governs ongoing usage of the now-approved tool
- L7 verifies that shadow usage migrates to the governed channel
- Any remaining shadow usage of the same tool is escalated as a policy violation
L7 Discovery → Governance Review → L6 Registry → L6 MAP Policies → L7 Verification
Shadow AI Is an Opportunity
Every shadow AI tool discovered by L7 represents demand for AI capability. Rather than simply blocking tools, use L7 discoveries to understand what your teams need and provide governed alternatives. Shadow AI goes down when governed AI is easy to use.
Operating Modes
| Mode | Behavior |
|---|---|
| Monitor | Shadow AI activity is detected and logged. No user-facing alerts or blocks. Governance team receives reports. |
| Advisory | Detected shadow AI triggers notifications to the user with guidance on approved alternatives. No blocks. |
| Enforce | Blocked tools are actively prevented. Users are redirected to approved alternatives. Security team alerted for persistent violations. |
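The three modes differ only in which actions fire on a detection, which can be sketched as a dispatch function. The action names and event fields below are illustrative assumptions, not L7's API.

```python
def handle_detection(event: dict, mode: str) -> list[str]:
    """Map a shadow AI detection to actions under monitor, advisory, or enforce mode."""
    actions = ["log_event", "report_to_governance"]  # all modes detect and report
    if mode in ("advisory", "enforce"):
        actions.append("notify_user_with_alternatives")
    if mode == "enforce" and event.get("tool_blocked"):
        actions.append("block_request")
        if event.get("repeat_violation"):
            actions.append("alert_security_team")  # persistent violations escalate
    return actions
```

Keeping monitor mode's behavior as the shared base means switching modes only ever adds actions, never silently drops reporting.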
Enforcement Requires Endpoint Control
L7 enforcement mode requires TheWARDN's browser extension, an endpoint agent, or network-level controls (proxy/firewall). Monitor and Advisory modes work with network-level detection alone.
Related Layers
- L6: Model Governance -- L7 discovers tools; L6 governs them after approval
- L1: Prompt Governance -- prompts sent to shadow AI tools bypass L1 entirely, which is why L7 exists
- L3: Custody & Chain of Evidence -- data sent to shadow AI tools breaks the chain of custody