
The Strengths and Shortcomings of AI Control Tower

As enterprises move from “AI experiments” to “AI everywhere,” most governance programs run into the same problem: the AI footprint is broader than anyone’s inventory. Teams may be using sanctioned foundation-model platforms, embedded AI features inside SaaS tools, browser-based copilots, and internal applications calling models directly. Each use case comes with different data pathways, controls, and owners.

By Dr. Anirban Banerjee, CEO and Co-founder of Riscosity · Published 1/26/2026 · 5 min read

This is why platforms like ServiceNow AI Control Tower are showing up in governance roadmaps. Control Tower helps organizations standardize how AI systems are requested, reviewed, cataloged, and managed across their lifecycle. It can bring order to chaos.

But there’s a second, equally important reality: the strongest governance workflow in the world can’t govern what it can’t see. If you can’t reliably discover unknown AI usage and unknown data flows to models and AI-enabled tools, governance becomes “best effort,” and risk concentrates in the blind spots.

This primer explains what Control Tower is, where it shines, where it falls short for monitoring real AI use, and why pairing it with discovery of unknown AI data flows can turn governance from a static catalog into a living control system.

What is AI Control Tower in plain terms?

AI Control Tower is best thought of as a governance operating system for AI:

  • A central place to register AI systems (models, agents, skills, prompts, datasets, and the business workflows they support)
  • Workflow-based intake to ensure new AI initiatives follow a consistent path
  • Risk, compliance, and accountability steps (assessments, approvals, control checks, audit artifacts)
  • Ongoing management across the AI lifecycle (changes, re-validation, offboarding)
  • Reporting to help leadership understand what’s deployed and how it aligns to policy and value

AI Control Tower helps you move from “AI sprawl” to “AI governance with structure.”

What ServiceNow AI Control Tower is designed to do well

Control Tower’s biggest strength is that it’s oriented around enterprise processes, not just technical telemetry. That matters because AI risk is rarely purely technical: gaps in ownership, accountability, and change management are often the root cause of failures.

Here are the areas where ServiceNow AI Control Tower is typically strongest:

1) A system of record for AI assets

Most companies have no shared source of truth for “what AI exists here.” Control Tower addresses that by providing a structured inventory model and a way to link AI assets to business services, owners, and governance documentation.

Why this matters:

  • You can’t manage AI like a one-off project.
  • A durable AI inventory becomes the basis for policy enforcement, audit readiness, and operational ownership.
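
To make the idea concrete, here is a minimal sketch (in Python, with illustrative field names, not ServiceNow’s actual data model) of what one inventory record might need to capture to support policy enforcement and audit readiness:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAssetRecord:
    """Hypothetical inventory entry for one AI system; field names are illustrative."""
    asset_id: str                      # unique identifier in the inventory
    name: str                          # e.g. "Support ticket summarizer"
    asset_type: str                    # "model", "agent", "embedded SaaS feature", ...
    owner: str                         # accountable person or team
    business_service: str              # workflow or service the asset supports
    vendor: str | None = None          # external provider, if any
    data_categories: list[str] = field(default_factory=list)  # e.g. ["customer PII"]
    approval_status: str = "pending"   # "pending", "approved", "retired"
    last_reviewed: date | None = None  # drives re-validation reminders

# Example: registering an embedded SaaS AI feature that often escapes inventories
record = AIAssetRecord(
    asset_id="ai-0042",
    name="CRM email drafting assistant",
    asset_type="embedded SaaS feature",
    owner="sales-ops",
    business_service="Customer communications",
    vendor="ExampleCRM",
    data_categories=["customer contact data"],
)
```

The point is less the specific fields than having a single, structured record that every downstream control can reference.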

2) Standardized intake and lifecycle governance

Control Tower is built for repeatability: intake → review → approvals → deployment → ongoing oversight → retirement.

Why this matters:

  • You get consistency across teams that otherwise invent their own process.
  • It reduces the “governance tax” by embedding controls into a familiar workflow system rather than relying on spreadsheets and email threads.
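
The lifecycle described above (intake → review → approvals → deployment → ongoing oversight → retirement) is essentially a small state machine. A hypothetical sketch of the stages and allowed transitions, purely to illustrate what repeatability means in practice:

```python
from enum import Enum

class Stage(Enum):
    INTAKE = "intake"
    REVIEW = "review"
    APPROVAL = "approval"
    DEPLOYED = "deployed"
    OVERSIGHT = "oversight"
    RETIRED = "retired"

# Allowed transitions: every AI initiative follows the same path,
# and a change that tries to skip a stage is rejected.
ALLOWED = {
    Stage.INTAKE: {Stage.REVIEW},
    Stage.REVIEW: {Stage.APPROVAL, Stage.INTAKE},      # send back for more information
    Stage.APPROVAL: {Stage.DEPLOYED, Stage.REVIEW},
    Stage.DEPLOYED: {Stage.OVERSIGHT},
    Stage.OVERSIGHT: {Stage.OVERSIGHT, Stage.RETIRED}, # periodic re-validation
    Stage.RETIRED: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move an AI initiative to the next stage, rejecting out-of-process jumps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target
```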

3) Built-in connective tissue to enterprise governance work

In many organizations, AI governance touches multiple functions: risk, compliance, privacy, security, procurement, and IT operations. Control Tower’s workflow approach can align these stakeholders around a common artifact trail.

Why this matters:

  • Governance becomes auditable and operationally trackable.
  • You’re not trying to reconstruct who approved what, when, and under which policy.

4) Useful when AI is already centralized

If your AI strategy runs through a handful of sanctioned platforms and teams, Control Tower can work well. When AI deployments are “known,” the platform can help:

  • keep approvals consistent
  • track changes
  • maintain ownership and documentation
  • report progress and coverage

Where Control Tower tends to fall short for “effectively monitoring AI use”

Control Tower excels at governing AI assets you already know about or can discover from specific connected sources. It is not, by itself, a universal discovery layer for unknown AI usage or unknown data flows.

That distinction matters because the riskiest AI usage is often:

  • distributed across departments
  • embedded inside third-party tools
  • adopted quickly without governance pathways
  • happening in user-driven channels (browsers, plugins, desktop apps)
  • occurring through direct API calls from internal apps

Here are the common gaps that show up in practice.

1) Inventory and governance workflows are not the same as “finding AI in the wild”

A governance platform can maintain a perfect inventory and still miss real AI usage if the inventory depends on:

  • manual declaration (people filling out intake forms)
  • partial integrations (coverage limited to specific platforms)
  • sporadic discovery (e.g., only in certain cloud AI environments)

If you’re trying to answer questions like:

  • “Which AI tools are employees actually using this month?”
  • “Which teams are sending sensitive data to external models?”
  • “Where are AI features turned on inside SaaS apps?”
  • “Which internal services are calling model APIs, and with what data types?”

…a workflow-based governance layer doesn’t generate those answers on its own.
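
To answer the last question, for example, teams typically need network or proxy telemetry rather than the governance catalog. A hedged sketch, assuming egress logs can be exported with a source service and destination host (the field names and the endpoint list are illustrative):

```python
from collections import defaultdict

# Hostnames of known AI providers; in practice this list is far longer
# and maintained by a discovery tool, not by hand.
KNOWN_AI_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_ai_callers(egress_log: list[dict]) -> dict[str, set[str]]:
    """Map each internal service to the AI endpoints it was observed calling.

    Each log record is assumed to look like:
    {"source_service": "billing-api", "dest_host": "api.openai.com"}
    """
    callers: dict[str, set[str]] = defaultdict(set)
    for event in egress_log:
        if event["dest_host"] in KNOWN_AI_ENDPOINTS:
            callers[event["source_service"]].add(event["dest_host"])
    return dict(callers)
```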

2) “Monitoring” often means “tracking what’s registered,” not “observing real data flows”

It’s helpful to track adoption and usage metrics for known AI agents and approved systems. But that is a different category from monitoring unknown data movement, for example:

  • customer data embedded in prompts
  • regulated data being summarized or transformed by AI features inside SaaS
  • credentials or internal code being pasted into tools
  • files being uploaded to AI-enabled services

When governance systems don’t have direct visibility into these flows, the organization ends up managing AI risk based on intent (“we approved this”) rather than reality (“this is what’s actually happening”).
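
Observing those flows means inspecting outbound content, not just approvals. The sketch below is a deliberately simplified illustration of flagging obvious sensitive patterns in a prompt before it leaves the network; real data classification involves far more detectors, context, and validation:

```python
import re

# Deliberately simple patterns for illustration only.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key_like": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "us_ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

# Example: this prompt would be flagged before it reaches the model endpoint
findings = classify_prompt(
    "Summarize the contract for jane.doe@example.com, key sk-abcdef1234567890abcd"
)
print(findings)  # ['email_address', 'api_key_like']
```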

3) Shadow AI and “embedded AI” create governance blind spots

Even companies with strong policies face the same pattern:

  • An employee uses a new AI feature inside an existing tool (CRM, ticketing, BI, marketing, design, collaboration).
  • No one thinks of it as a “new AI system,” so it never goes through intake.
  • Data begins flowing to model endpoints through that feature.
  • Governance only learns of it months later, after an incident, audit, or vendor review.

4) Model risk isn’t only “what model,” but “what data, what route, what control”

Traditional AI governance discussions focus on what model is used and whether the vendor is approved. In practice, risk is often in the specifics:

  • which data types are going to the model
  • whether prompts include customer identifiers, contracts, or source code
  • whether the pathway is a browser session, API integration, plug-in, or embedded feature
  • whether there’s retention, training, or logging on the vendor side
  • whether access controls reflect least privilege
  • whether outputs are entering regulated workflows

Control Tower can manage the paperwork and approvals around these questions, but it can’t validate the actual flows without additional telemetry.
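
As an illustration of the difference, a telemetry-backed view would answer the bullet points above per observed flow, in a structure roughly like this (hypothetical field names, not any vendor’s schema):

```python
from dataclasses import dataclass

@dataclass
class ObservedAIFlow:
    """One observed data flow to an AI endpoint, enriched with risk-relevant facts."""
    source: str             # internal app, browser session, plugin, or SaaS feature
    destination: str        # model endpoint or AI-enabled vendor
    pathway: str            # "api", "browser", "plugin", "embedded_feature"
    data_types: list[str]   # e.g. ["customer identifiers", "source code"]
    vendor_retains_data: bool | None   # None = unknown, which is itself a finding
    used_for_training: bool | None
    least_privilege: bool              # do access controls match the use case?
    feeds_regulated_workflow: bool     # do outputs enter regulated processes?
```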

Layering in data flow discovery and control

If your organization wants to mature AI governance beyond policy documents and self-attestation, pairing a control-tower approach with discovery offers three practical benefits.

1) You can close the “unknown unknowns” gap

When a new AI-enabled feature rolls out in a tool your company already uses, discovery enables you to catch it early, and governance becomes proactive instead of reactive.

2) Your inventory becomes a living system, not a quarterly exercise

Without discovery, inventories drift. They become stale as usage changes and teams adopt new tools. With discovery, the inventory can be refreshed continuously, and exceptions can be routed back into intake.
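
A minimal sketch of that continuous refresh loop, assuming a discovery feed of observed AI destinations and a set of destinations already registered in the inventory; routing back into intake is only stubbed:

```python
def reconcile(observed: set[str], registered: set[str]) -> set[str]:
    """Return AI destinations seen on the wire but missing from the inventory."""
    return observed - registered

def route_to_intake(destination: str) -> None:
    """Stub: in practice this would open an intake record, trigger an
    assessment, and assign an owner in the governance platform."""
    print(f"New unregistered AI usage detected: {destination} -> opening intake")

# Example run of the refresh loop
observed = {"api.openai.com", "newtool.ai", "api.anthropic.com"}
registered = {"api.openai.com", "api.anthropic.com"}

for dest in sorted(reconcile(observed, registered)):
    route_to_intake(dest)   # inventory stays aligned with observed reality
```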

3) Risk conversations become evidence-based

Instead of debating hypotheticals (“we don’t think anyone is using that”), you can make decisions grounded in actual usage patterns and data movement.

Practical takeaways for AI governance leaders

If you’re evaluating AI Control Tower (or any comparable governance platform), these questions help clarify what you’re actually buying and what you may still need:

  1. How does it discover AI usage?
    Is discovery connector-limited? Does it rely on manual intake? Does it cover embedded AI features and shadow usage?
  2. Can it detect unknown data flows to AI endpoints?
    If not, what tool will provide that visibility?
  3. What happens when something new is discovered?
    Can it automatically create an intake record, trigger an assessment, and assign owners?
  4. How do you prevent inventory drift?
    Is there a continuous refresh mechanism tied to observed reality?
  5. What’s your definition of “monitoring”?
    Are you monitoring registered assets or monitoring actual usage and data movement?

ServiceNow AI Control Tower can be a strong foundation for AI governance because it formalizes intake, ownership, compliance activities, and lifecycle management. Its limitation is common to many third-party risk management (TPRM) and governance platforms: they are built to manage what is declared and connected, not to independently discover all AI usage and all AI data flows across an organization. Paired with the right discovery and data flow control platform, however, they become powerful assets for enabling the safe adoption of AI across the organization.