Illuminate AI Adoption with AIBOMs

AI rarely arrives as a single platform. It appears inside tools already in use: a chat feature in a CRM, a meeting‑summary plug‑in, an agent that files tickets, or a few SDK pilots run by engineering teams. Each may touch sensitive data and send it elsewhere. When it isn’t clear what these AI tools handle, where data goes, and under what terms, visibility and risk assessment suffer.

Jackson Harrower
Chief of Staff at Riscosity
Published on 9/2/2025 · 5 min.

An AI Bill of Materials (AIBOM) addresses this gap. It is a concise, living profile for every AI capability an organization can invoke—models, agents, SaaS features, plug‑ins, and APIs. Kept in a machine‑readable format, it serves as a practical record that can inform runtime decisions in a control plane.

An AIBOM summarizes five things about each AI capability: who provides it, what it can do, what data it sees, where it runs, and how it should be treated. Just as software teams use SBOMs to map libraries and licenses, AIBOMs map models, endpoints, and data behavior for governance and policy.

The 10 core fields to start with (Minimum Viable AIBOM)

A compact schema is enough to be useful on day one:

  1. Provider & Product – vendor and product name.
  2. Model/Capability – the function in use (for example: text generation, RAG, or agent).
  3. Endpoint – API URL or SDK package.
  4. Region/Residency – where processing and storage occur.
  5. Data Classes – PII, PHI, PCI, source code, secrets.
  6. Retention & Training Use – logging level, retention period, and whether data is used to improve models.
  7. Auth & Secrets – method, storage location, and rotation.
  8. Actions/Permissions – allowed actions such as email send, ticket write, or HTTP calls.
  9. Downstream Dependencies – plug‑ins or APIs the capability can invoke.
  10. Policy Tier – Approved, Conditional, or Blocked, with an owner and review date.

These fields support practical policy decisions and make it easier to explain governance to leadership.
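
To make the schema concrete, here is a sketch of what a single machine‑readable record might look like. The vendor, endpoint, and values are illustrative placeholders, and the exact field names will vary by catalog.

# Illustrative Minimum Viable AIBOM record (all values hypothetical)
- provider: ExampleVendor
  product: ExampleChat Enterprise
  capability: text_generation              # model/capability in use
  endpoint: https://api.examplevendor.com/v1/chat
  region: us-east-1                        # where processing and storage occur
  data_classes: [pii, source_code]         # what the capability sees
  retention:
    logs: 30d
    used_for_training: false
  auth:
    method: api_key
    secret_store: vault
    rotation_days: 90
  actions: [http_call, ticket_write]       # allowed actions
  downstream_dependencies: [example_calendar_plugin]
  policy:
    tier: conditional                      # Approved | Conditional | Blocked
    owner: security
    review_date: 2026-03-01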

How this can play out

Three ordinary moments illustrate how AIBOM data can drive outcomes:

  • Cleaning customer data in a chatbot. A control plane can look up the AIBOM entry for the provider, see “PII + retains logs,” and apply redaction so identifiers do not leave the boundary.
  • An agent making an HTTP call. If the AIBOM marks this action as risky, the policy may allow the call only when the destination appears on an allowlist; otherwise an exception process can open.
  • Uploads from an EU office. If data carries an EU‑resident tag and the AIBOM lists an EU endpoint, the request can be routed in‑region. If no compliant route exists, the call can be denied with a clear message.

In each case, controls do not rely on individual memory; context from the catalog guides the decision.

Where it lives (and how it stays fresh)

The AIBOM can live in a small catalog in a CMDB or a dedicated repository. Common feeders include: passive discovery of AI traffic and SaaS features, procurement and TPRM intake mapped to AIBOM fields, and a short developer registration for SDKs or agents. Given the pace of change, teams often monitor for drift such as new endpoints or regions, changes to data‑use settings, or SDK upgrades that warrant review.
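
For the developer registration path, a short intake checked in alongside the service can map directly onto the AIBOM fields. A minimal sketch, with hypothetical field names and values:

# Illustrative developer registration for a new SDK or agent (hypothetical values)
registration:
  requested_by: jane.doe@example.com
  provider: ExampleVendor
  capability: agent                        # maps to Model/Capability
  sdk_package: example-agent-sdk==1.4.2    # maps to Endpoint
  region: eu-west-1                        # maps to Region/Residency
  data_classes: [pii]                      # maps to Data Classes
  actions: [http_call, ticket_write]       # maps to Actions/Permissions
  downstream_dependencies: [example_jira_connector]
  requested_tier: conditional              # owner reviews and sets Policy Tier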

Example 30‑day starter plan

Week 1: Definition. Many teams start by adopting a core schema and mapping procurement/TPRM forms to those fields, assigning an owner per record and setting a review cadence.

Week 2: Population. Enable discovery and populate the top ten AI destinations by usage and sensitivity. Breadth typically beats depth at the start.

Week 3: Application. Draft a few baseline policies that read AIBOM fields and run in monitor mode to calibrate.

Week 4: Enforcement and measurement. Move to enforcement at the egress points already in place. Send telemetry to the SIEM, time‑box exceptions, and report coverage and violations to the risk committee.

Starter controls (reference enforcement rules)

These examples show how AIBOM fields drive runtime decisions. Tune locally; start in monitor mode and move to enforce once calibrated. Keep them portable and human‑readable so audit and engineering share the same view.

 
# Starter controls driven by AIBOM fields
# Mode starts in "monitor" to calibrate, then moves to "enforce".

- name: Secrets never leave
  why: Prevent credential/token exfiltration
  match: { data_classes: [secrets] }
  action: block
  mode: enforce
  owner: security
  review_in_days: 90

- name: PII allowed only with redaction for conditional vendors
  why: Reduce privacy exposure while preserving utility
  match: { data_classes: [pii], aibom.policy.tier: conditional }
  action: redact
  redact: [names, emails, phones]
  mode: monitor # flip to enforce after 7-14 days of tuning
  owner: privacy
  review_in_days: 60

- name: EU-resident data stays in-region
  why: Enforce sovereignty and residency commitments
  match: { data_tag: EU_resident }
  action: route
  route.to: eu_approved_models
  on_no_compliant_route: deny_with_message
  mode: enforce
  owner: compliance
  review_in_days: 180

- name: Agents must respect the HTTP allowlist
  why: Limit agent blast radius and data leakage via plugins/APIs
  match: { aibom.capabilities.actions: [http_call] }
  action: allow_if
  allow_if.in_allowlist: true
  on_violation: log_and_open_exception
  mode: monitor
  owner: appsec
  review_in_days: 30
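
When one of these rules matches in monitor mode, the control point can emit an event to the SIEM described in Week 4. A sketch of what such an event might carry, with hypothetical field names:

# Illustrative policy event sent to the SIEM (hypothetical field names)
event:
  timestamp: 2025-09-15T14:02:11Z
  rule: PII allowed only with redaction for conditional vendors
  action_taken: redact                     # block | redact | route | allow
  mode: monitor
  aibom_ref:
    provider: ExampleVendor
    policy_tier: conditional
    data_classes: [pii]
  destination: https://api.examplevendor.com/v1/chat
  exception_opened: false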
 

Common pitfalls

  • Cataloging everything on day one can slow momentum. Starting minimally often helps.
  • Vendor attestations are informative, yet direct observation of what flows is more reliable.
  • Plug‑ins and downstream APIs expand the blast radius; policies should follow the chain.
  • Binary allow/deny policies can encourage bypass; conditional tiers maintain productivity while reducing risk.

Where Riscosity fits in

Riscosity automatically creates a catalog of your AI vendors and leverages the AIBOM fields to redact, re-route, or block data in motion across models, agents, and SaaS features. 

The end result is safer AI adoption with minimal operational overhead.

Final Thoughts 

AIBOMs are a strong first step toward formalizing visibility into an organization’s adoption of AI tools, and they are likely to become an essential part of compliance as AI regulations evolve.

Keep the schema lean and the catalog live. An AIBOM is most effective when connected to runtime controls and refreshed as services change.