L7: Shadow AI Detection

L7 discovers unauthorized AI usage across your organization. Shadow AI is any AI tool, model, or service used without organizational awareness, approval, or governance. L7 finds it, classifies the risk, and recommends governance actions.

What Is Shadow AI?

Shadow AI is the AI equivalent of shadow IT -- tools adopted by individuals or teams without going through procurement, security review, or governance approval. Examples include:

  • An employee pasting customer data into ChatGPT in a browser tab
  • A developer using a personal OpenAI API key in production code
  • A marketing team signing up for an AI copywriting tool with a corporate credit card
  • A contractor using an AI code assistant that sends code to an external model
  • An analyst uploading a spreadsheet to an AI data analysis platform

The Scope of the Problem

Industry surveys consistently show that 60-80% of AI tools used in enterprises are unmanaged. These tools process sensitive data under consumer terms of service, with no audit trail, no data handling agreements, and no organizational visibility.

Capabilities

Network Scanning for AI API Calls

L7 monitors network traffic for outbound connections to known AI service endpoints:

Provider              Detected Endpoints
OpenAI                api.openai.com, chat.openai.com
Anthropic             api.anthropic.com, claude.ai
Google                generativelanguage.googleapis.com, gemini.google.com
Mistral               api.mistral.ai, chat.mistral.ai
Cohere                api.cohere.ai
Hugging Face          api-inference.huggingface.co
Replicate             api.replicate.com
Custom/Self-hosted    Configurable endpoint patterns

Traffic analysis identifies:

  • Which AI services are being contacted
  • Volume and frequency of requests
  • Data payload sizes (indicating potential sensitive data transfer)
  • Source users and departments

Browser Extension Monitoring

The TheWARDN browser extension (Chrome/Edge, MV3) provides visibility into browser-based AI usage:

  • Detects navigation to AI web applications
  • Monitors clipboard activity when AI tools are in focus
  • Identifies browser extensions that interact with AI services
  • Tracks copy/paste of sensitive data into AI interfaces

Shadow AI Detection:
  Source:     Browser (Chrome)
  User:       marketing/sarah
  Tool:       jasper.ai
  Activity:   Text input (paste from clipboard)
  Data Size:  4,200 characters
  Risk:       MEDIUM -- tool not in approved registry
  
  Recommended Action: Review jasper.ai for governance onboarding

Browser Extension Is Opt-In Visibility

The browser extension provides visibility without blocking. It reports what AI tools are being used so governance teams can make informed decisions about which tools to approve, restrict, or replace with governed alternatives.

Desktop Agent Discovery

L7 discovers AI-powered desktop applications and agents running on managed endpoints:

  • AI code assistants (GitHub Copilot, Cursor, Cody, Continue)
  • AI writing tools (Grammarly AI, Notion AI, Otter.ai)
  • AI image generators (Midjourney, DALL-E desktop clients)
  • Local LLM runners (Ollama, LM Studio, llama.cpp)
  • Custom AI agents and automation tools

Discovery is performed through:

  • Process enumeration on managed devices
  • Application inventory integration (SCCM, Intune, Jamf)
  • Network connection correlation with known AI endpoints
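A minimal sketch of the process-enumeration step: correlate a device's process inventory (e.g. an export from SCCM, Intune, or Jamf) with known AI application signatures. The signature list here is invented for illustration and is far smaller than a real detection database.

```python
# Hypothetical signature map: substring of process name -> application.
AI_APP_SIGNATURES = {
    "copilot": "GitHub Copilot",
    "cursor": "Cursor",
    "ollama": "Ollama",
    "lm studio": "LM Studio",
}

def find_ai_apps(process_names: list[str]) -> list[str]:
    """Return the AI applications matched in a device's process list."""
    found = set()
    for name in process_names:
        for signature, app in AI_APP_SIGNATURES.items():
            if signature in name.lower():
                found.add(app)
    return sorted(found)
```

Matched applications would then be cross-checked against network connections to known AI endpoints to reduce false positives from similarly named processes.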

Unauthorized Tool Detection

L7 maintains a continuously updated database of AI tools and services. When a new tool is detected, it is classified against the organization's governance policies:

Classification        Meaning                                                   Action
Approved              Tool is in the governance registry with active policies   No action needed
Known - Unapproved    Tool is recognized but not yet approved                   Flag for governance review
Unknown               Tool is not in the database                               Flag as high priority for investigation
Blocked               Tool has been explicitly prohibited                       Alert security team
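The classification logic can be sketched as a simple lookup against the organization's governance lists. This is an assumption about ordering (a blocked tool wins over any other status); the source defines the categories but not their precedence.

```python
def classify_tool(tool: str, approved: set[str],
                  known: set[str], blocked: set[str]) -> str:
    """Map a detected tool to a governance classification."""
    if tool in blocked:                # explicit prohibition takes precedence
        return "Blocked"
    if tool in approved:               # in registry with active policies
        return "Approved"
    if tool in known:                  # recognized but not yet approved
        return "Known - Unapproved"
    return "Unknown"                   # not in the database at all
```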

Risk Classification

When L7 detects shadow AI usage, it assigns a risk level based on multiple factors:

Risk Assessment: shadow_event_20260410_0847
  Tool:              copy.ai
  Data Sensitivity:  Unable to determine (no DLP integration)
  Agreement Status:  No DPA on file
  User Count:        3 users detected
  Data Volume:       ~12,000 tokens/day
  Regulatory Impact: MEDIUM (marketing content, no PII detected)
  
  Overall Risk:      MEDIUM
  
  Recommendations:
    1. Contact copy.ai for enterprise agreement and DPA
    2. Evaluate as approved tool under MODEL_ALLOWLIST
    3. If approved, configure L6 MAP policies for ongoing governance
    4. If denied, notify affected users and provide approved alternative

Risk factors include:

  • Data sensitivity -- is sensitive data being sent to the tool?
  • Agreement status -- does the organization have a DPA/BAA with the provider?
  • User count -- how widespread is adoption?
  • Data volume -- how much data is being processed?
  • Regulatory exposure -- does usage implicate specific regulations?
  • Tool reputation -- is the tool from a known, reputable provider?
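The factors above can be combined into an overall level. The weights and thresholds below are invented for illustration; L7's actual scoring model is not documented here, but the shape of the calculation matches the example assessment (no DPA and low user count yielding MEDIUM).

```python
def overall_risk(sensitive_data: bool, has_dpa: bool,
                 user_count: int, regulated: bool) -> str:
    """Illustrative additive risk score over the documented factors."""
    score = 0
    if sensitive_data:     # sensitive data sent to the tool
        score += 2
    if not has_dpa:        # no DPA/BAA on file with the provider
        score += 1
    if user_count > 10:    # widespread adoption
        score += 1
    if regulated:          # usage implicates specific regulations
        score += 2
    if score >= 5:
        return "CRITICAL"
    if score >= 3:
        return "HIGH"
    if score >= 1:
        return "MEDIUM"
    return "LOW"
```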

Console Features

Shadow AI Dashboard

High-level view of unauthorized AI activity:

  • Total shadow tools detected -- unique tools found outside governance
  • Active shadow users -- number of users with shadow AI activity
  • Risk distribution -- breakdown by risk level (Critical / High / Medium / Low)
  • Trend chart -- shadow AI activity over time (ideally trending down as tools are governed)
  • New detections -- tools detected for the first time in the current period

Detection Feed

Real-time feed of shadow AI events:

  • Tool name and provider
  • User and department
  • Activity type (API call, web app, desktop agent)
  • Data volume estimate
  • Risk classification
  • Recommended action

Tool Inventory

Comprehensive list of all AI tools detected across the organization, whether approved or not:

  • Tool name, provider, category
  • First detected date
  • User count
  • Data volume estimate
  • Governance status (Approved / Under Review / Blocked / Unknown)
  • Link to L6 registry entry (if approved)

Remediation Tracker

Track the governance onboarding process for discovered shadow AI tools:

  • Investigate -- assess the tool and its usage
  • Decide -- approve, block, or replace with a governed alternative
  • Implement -- configure L6 policies, notify users, update documentation
  • Verify -- confirm shadow usage has migrated to the governed path
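The four stages above form a linear workflow. As a sketch, it can be modeled as a small state machine; the stage names come from the tracker, while the strict forward-only ordering is an assumption about how the console enforces progression.

```python
# Stages of the remediation workflow, in order.
STAGES = ["Investigate", "Decide", "Implement", "Verify"]

def advance(stage: str) -> str:
    """Move a remediation case to its next stage; Verify is terminal."""
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]
```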

How L7 Connects to L6

L7 and L6 form a closed loop:

  1. L7 discovers an unauthorized AI tool or model
  2. The governance team evaluates the tool
  3. If approved, the tool is added to the L6 Model Registry with appropriate MAP policies
  4. L6 governs ongoing usage of the now-approved tool
  5. L7 verifies that shadow usage migrates to the governed channel
  6. Any remaining shadow usage of the same tool is escalated as a policy violation

L7 Discovery → Governance Review → L6 Registry → L6 MAP Policies → L7 Verification

Shadow AI Is an Opportunity

Every shadow AI tool discovered by L7 represents demand for AI capability. Rather than simply blocking tools, use L7 discoveries to understand what your teams need and provide governed alternatives. Shadow AI goes down when governed AI is easy to use.

Operating Modes

Mode        Behavior
Monitor     Shadow AI activity is detected and logged. No user-facing alerts or blocks. Governance team receives reports.
Advisory    Detected shadow AI triggers notifications to the user with guidance on approved alternatives. No blocks.
Enforce     Blocked tools are actively prevented. Users are redirected to approved alternatives. Security team alerted for persistent violations.
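The mode table can be read as a dispatch on each detection event. The action names below are assumptions for illustration; the source describes behaviors, not an API.

```python
def handle_detection(mode: str, tool_blocked: bool) -> list[str]:
    """Illustrative per-event actions for each operating mode."""
    actions = ["log_event"]  # every mode detects and logs for governance reports
    if mode == "advisory":
        actions.append("notify_user_with_alternatives")
    elif mode == "enforce" and tool_blocked:
        actions.append("block_and_redirect")
        actions.append("alert_security_team")  # escalation on violations
    return actions
```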

Enforcement Requires Endpoint Control

L7 enforcement mode requires the TheWARDN browser extension, an endpoint agent, or network-level controls (proxy/firewall). Monitor and Advisory modes work with network-level detection alone.
