Core Concepts

This page explains the key ideas behind Trust3 AI Governance so you can get oriented before diving into the product. If you are new to Trust3, read this before the Quick Start.


The governance problem

Enterprises run AI agents across multiple platforms — Databricks, Microsoft Copilot Studio, custom applications, and more. Each platform has its own admin console, its own logs, and its own way of describing what agents exist and who can use them.

The result: no one has a complete picture. IT teams piece together spreadsheets. Compliance teams ask questions that take days to answer. New agents go to production without documented owners or approvals. Agents that were registered last quarter have changed scope without anyone noticing.

Trust3 AI Governance closes this gap. It gives your organization one place to see every AI agent, evaluate it against your policies, and act on what you find.


Core concepts

AI asset inventory

The inventory is the central catalog of every AI agent and related asset Trust3 knows about. Each entry records where the agent lives (platform, workspace), who owns it, what it is for, which identities can invoke it, and how it is performing against your policies.

The inventory is built in two ways:

  • Automated discovery — Trust3 connects to your platforms (Databricks, Microsoft Copilot Studio) and pulls agent metadata on a regular schedule. New agents appear in the inventory automatically.
  • Manual registration — For platforms without an automated connector, teams register agents through a structured form. GIA assists with filling the form from a plain-language description.

Trust Score

Every agent in the inventory has a Trust Score — a percentage that summarizes how well that agent meets your governance requirements.

  Band     Score           What it means
  High     70% and above   Agent is well-governed. No immediate action needed.
  Medium   40–69%          Gaps present. The owner should review.
  Low      Below 40%       Significant gaps. Remediation is needed.

Each agent also has a trust status — Trusted, Unverified, or Untrusted — based on whether it meets your active policy requirements.

The Trust Score is not a fixed label. It moves as the agent's governance state changes: it drops when violations open or required information goes missing, and it recovers when those issues are resolved.
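The band thresholds above can be expressed as a simple classification. The sketch below is illustrative only; the function name and signature are not part of the Trust3 API.

```python
def trust_band(score: float) -> str:
    """Map a Trust Score percentage to its band.

    Thresholds mirror the table above: 70+ is High,
    40-69 is Medium, below 40 is Low.
    """
    if score >= 70:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"
```

For example, `trust_band(82.5)` returns `"High"`, while `trust_band(39.9)` returns `"Low"`.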

Policies

Policies are the rules Trust3 evaluates every agent against. They encode what "governed" means for your organization — whether agents have assigned owners, whether they use approved models, whether they are documented appropriately for your compliance framework.

Trust3 ships with pre-built policy packs for common frameworks including NERC CIP / Utility, EU AI Act, FERC, and Internal AI Governance. You can also create custom policies manually or by uploading a compliance document and letting GIA extract the rules.

When an agent fails a policy rule, Trust3 opens a violation — recording what failed, the severity, the affected agent, and what remediation looks like. Violations are tracked until they are resolved or formally waived.

Shadow AI

Shadow AI refers to agents operating in your environment without governance oversight — agents that were never registered, or agents using organizational credentials that no one is tracking.

Trust3 surfaces two types of shadow AI that organizations can act on:

  • Agents running on approved infrastructure (Databricks, Copilot Studio) that were never registered in the governance inventory
  • Agents calling model APIs using credentials issued by your organization but not tracked in your governance program

When Trust3 discovers an unregistered agent, it appears in the inventory so your team can evaluate it and decide what to do.

Pre-development approval

The pre-development approval workflow is how new AI agents get documented sign-off before they reach production. A developer submits a registration describing the agent, its purpose, its data scope, and its intended users. The submission goes through a review chain — Legal, then Compliance — before the agent is approved to build.

Every step in the chain is recorded: who submitted, who reviewed, who approved, and when. That record stays attached to the agent's inventory entry permanently and is available in compliance evidence exports.
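The audit trail described above amounts to a timestamped log of each step in the review chain. The sketch below illustrates the idea under assumed names (the stage order, field names, and addresses are examples, not the Trust3 record format).

```python
from datetime import datetime, timezone

# Review stages in the order described above.
REVIEW_CHAIN = ["Legal", "Compliance"]

def record_step(log: list[dict], step: str, actor: str) -> None:
    """Append one immutable, timestamped entry to the approval record."""
    log.append({
        "step": step,
        "actor": actor,
        "at": datetime.now(timezone.utc).isoformat(),
    })

approval_log: list[dict] = []
record_step(approval_log, "submitted", "dev@example.com")
for stage in REVIEW_CHAIN:
    record_step(approval_log, f"approved:{stage}", f"{stage.lower()}@example.com")
```

After both reviews, the log holds three entries answering who submitted, who reviewed, and when, which is the record that stays attached to the agent's inventory entry.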

GIA — Governance Intelligence Agent

GIA is the AI assistant built into Trust3. It works across inventory, policies, and workflows and is available from the header on any page.

You can ask GIA governance questions in plain language and get answers grounded in your actual inventory:

  • "What needs attention now?"
  • "Show me agents without owners"
  • "Which agents have low trust scores on Databricks?"
  • "Why did this agent's trust score drop?"

GIA also assists with registration: describe a new agent in one sentence and GIA fills in the registration form. For policies, GIA can read a compliance document and extract enforceable rules ready for your review.

GIA remembers your conversation as you navigate between pages, so follow-up questions build on prior ones naturally.

Identities and ownership

Every agent should have a documented owner — the person or team accountable for it in production. Trust3 also maps the identities that can invoke each agent (users, service principals, automated accounts) from the platform metadata it discovers.

For short-lived or delegated credentials, Trust3 links them to a parent identity so compliance reviews show who ultimately acted, not just which temporary token appeared in a log.

Audit evidence

Everything Trust3 records — inventory state, policy evaluations, violation history, approval decisions — is available as structured evidence for compliance reviews and audits.

Because evidence is tied to specific discovery runs and snapshots, it is reproducible. You can show an auditor the state of your AI estate at any point in time covered by Trust3, backed by timestamped records rather than one-off screenshots.


How the pieces connect

An agent is discovered (automatically or by registration) and added to the inventory. Trust3 evaluates it against your active policies and assigns a Trust Score. Violations are opened for any gaps and tracked to resolution. If the agent is new, it goes through pre-development approval before reaching production. GIA gives your team natural-language access to everything in the inventory, so you can ask questions, act on findings, and complete governance tasks without navigating multiple screens.

The dashboard brings it all together: one view of your entire AI estate, scored, evaluated, and ready to act on.


Next steps

  • Quick Start — connect your first platform and run a scan
  • GIA — learn what GIA can do and how sessions work
  • Policies — understand the policy library and how to create rules
  • Pre-Development Workflow — walk through the agent approval process
  • Trust Score — understand how scores are calculated and what moves them