Why authentication and authorisation are not enough when AI agents access enterprise data across channels.
Your AI agent has access to the ticket system, the wiki, the CRM, and probably the HR platform. It communicates through a team chat — public channels, private channels, direct messages. Different audiences, different confidentiality expectations.
Now someone asks in the public channel: "How's the project going?"
The agent delivers a helpful, comprehensive summary. Revenue numbers from the finance system. A personnel change from HR. A critical security vulnerability from the engineering backlog. Not because it was hacked. Not because of a prompt injection. Because it was trying to be helpful — and helpful, to an LLM, means relevant.
This is not a model problem. This is an architecture problem. And it has a name.
The Three-Layer Gap
Enterprise security for AI agents typically covers two layers:
Authentication answers "Who is the agent?" — solved through OAuth2, service accounts, API keys. Authorisation answers "What may the agent access?" — solved through IAM, RBAC, ABAC, whatever your organisation uses.
But there is a third layer that almost nobody builds for: Dissemination Control — "What may the agent share, with whom, in which context?"
A human employee with read access to HR data and project data does not post salary information in the public team channel. Not because IAM prevents it — they have the access. But because they understand that a public channel is not the place for confidential HR data. They understand social context.
AI agents do not understand social context. They optimise for relevance. And the most relevant answer to a harmless question is often the one containing the most sensitive information.
Why This Is Getting Worse, Not Better
Three trends are converging right now.
MCP (Model Context Protocol) is becoming the standard for tool integration. Agents are no longer limited to generating text. They call APIs, query databases, create tickets, update documents. The number of systems an agent can reach is growing with every new MCP server that gets published.
Agents are being deployed in multi-user, multi-channel environments. The same agent serves the public company channel, the engineering team channel, and private direct messages. Each has a different audience and different confidentiality expectations. The agent does not differentiate.
Off-the-shelf MCP servers use a single API token with full access. No per-user permissions. No channel awareness. No dissemination policy. Every query runs with the same god-mode credentials. This is fine for a single developer using an agent locally. It is not fine when that agent serves an entire organisation.
The guardrail providers — Lakera, Prompt Armor, and others — address a different problem. They protect the model: input validation, prompt injection detection, output toxicity filtering. Important work, but it does not answer the infrastructure question: How does the agent fit into your existing identity, permission, and compliance landscape?
What Dissemination Control Actually Means
Dissemination Control is the governance layer that determines what information an AI agent may reveal, to whom, and through which channel — independent of whether the agent technically can access the data.
It is not a product. It is an architectural pattern. And it builds on infrastructure most enterprises already have: identity providers, policy engines, permission models in target systems.
The architecture I have been building and testing defines five governance layers and four adoption tiers. The full blueprint is published as an open reference document. Here, I want to focus on the key insight that makes it practical:
You do not need to solve everything at once.
Four Tiers, Each Adding One Capability
The most common objection I hear is: "This sounds like a massive project." It does not have to be. The architecture is designed for incremental adoption. Each tier adds exactly one governance capability to the previous one.
Tier 1: Tool Containment. The agent can only see tools that are explicitly whitelisted for the current channel. If the HR system is not on the whitelist for the engineering channel, the agent does not know it exists. No request means no leak, no error message, no side channel revealing that the system is there.
This is the single most effective measure against information leakage. And it requires no IAM changes, no identity provider integration, no token exchange. Just policy configuration. A policy engine like OPA evaluates "which tools are allowed for this channel" and the orchestration service removes everything else from the LLM request. Days to implement. Not weeks. Not months.
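As a minimal sketch of what Tier 1 looks like in the orchestration service, the following filters the tool list before it ever reaches the LLM. The names here (`CHANNEL_TOOL_WHITELIST`, `filter_tools`) are illustrative assumptions, not part of any MCP SDK, and the reference implementation uses Spring Boot rather than Python:

```python
# Tier 1 sketch: remove non-whitelisted tools before building the LLM request.
# A tool stripped here is invisible to the model: it cannot be called,
# mentioned, or probed for existence.

# Per-channel whitelist. Unknown channels get the empty set (default deny).
CHANNEL_TOOL_WHITELIST = {
    "engineering": {"ticket-system", "wiki"},
    "public": {"wiki"},
}

def filter_tools(channel: str, available_tools: list[dict]) -> list[dict]:
    """Return only the tools explicitly whitelisted for this channel."""
    allowed = CHANNEL_TOOL_WHITELIST.get(channel, set())  # default deny
    return [t for t in available_tools if t["name"] in allowed]

tools = [
    {"name": "ticket-system"},
    {"name": "hr-system"},
    {"name": "wiki"},
]

# The engineering channel never learns the HR system exists.
print([t["name"] for t in filter_tools("engineering", tools)])
# → ['ticket-system', 'wiki']

# An unconfigured channel (e.g. a new DM) sees no tools at all.
print(filter_tools["__missing__"] if False else filter_tools("random-dm", tools))
# → []
```

In a real deployment the whitelist would live in the policy engine (OPA in the reference stack) rather than in application code, but the enforcement point is the same: the filtered list is what goes into the LLM request.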
Tier 2: Identity Delegation. The agent acts with the requesting user's permissions, not a service account. When Anna asks about the sales pipeline, the ticket system returns Anna's data. When Paul asks the same question, the ticket system returns Paul's data — or nothing, if Paul does not have access. Same channel, same question, different user, different answer.
This requires an identity provider with token exchange support (Keycloak, Entra ID, Okta all support this) and a custom MCP server that performs the delegation. More engineering than Tier 1, but it builds on existing IAM infrastructure — no new permission model needed.
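The delegation step can be sketched as an OAuth2 Token Exchange request per RFC 8693, which is the mechanism Keycloak, Entra ID, and Okta implement. The function below only builds the form parameters; the endpoint URL, client credentials, and audience value are placeholders for whatever your identity provider uses:

```python
# Tier 2 sketch: OAuth2 Token Exchange (RFC 8693) form parameters.
# The MCP server trades its own session for a token scoped to the
# requesting user before calling the target system.

def build_token_exchange_request(user_access_token: str, audience: str) -> dict:
    """Form parameters to POST to the IdP token endpoint (RFC 8693)."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": user_access_token,   # the requesting user's token
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,                 # placeholder target system id
    }

params = build_token_exchange_request("anna-token", "ticket-system")
print(params["grant_type"])
# → urn:ietf:params:oauth:grant-type:token-exchange
```

The MCP server POSTs these parameters, together with its own client credentials, to the IdP's token endpoint and uses the returned token for the downstream call. The ticket system then applies Anna's permissions, not the agent's.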
Tier 3: Dissemination Policy. Even when a user has access to data and the tool is enabled, the policy engine evaluates whether the data's classification is appropriate for the channel's context. Confidential data stays out of public channels, even when the requesting user is authorised to see it. Same user, same permissions, different channel, different answer.
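A Tier 3 decision can be sketched as a comparison between a data classification and a channel clearance. The levels and channel names below are assumptions for illustration; in the reference architecture this evaluation happens deterministically in a policy engine such as OPA, not in application code:

```python
# Tier 3 sketch: is this data classification appropriate for this channel?
# Illustrative classification levels and channel clearances.

CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2}

# Maximum classification each channel may receive.
# Unknown channels have no clearance at all (default deny).
CHANNEL_CLEARANCE = {
    "company-public": "public",
    "engineering": "internal",
    "hr-private": "confidential",
}

def may_disseminate(data_classification: str, channel: str) -> bool:
    """True only if the channel is cleared for this classification."""
    clearance = CHANNEL_CLEARANCE.get(channel)
    if clearance is None:
        return False  # default deny for unconfigured channels
    return CLASSIFICATION_RANK[data_classification] <= CLASSIFICATION_RANK[clearance]

# Same user, same data, different channel, different answer.
print(may_disseminate("confidential", "hr-private"))      # → True
print(may_disseminate("confidential", "company-public"))  # → False
```

Note that this check runs even when the user's own access (Tier 2) would permit the data: authorisation and dissemination are evaluated independently.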
Tier 4: Compliance Monitoring. Asynchronous audit and semantic output analysis for high-security environments. Every tool call, every policy decision, every denial is logged. Optional: a separate LLM reviews agent responses for policy violations that rule-based layers cannot catch.
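The audit side of Tier 4 can be as simple as one structured record per policy decision, appended to a log the agent cannot modify. The field names below are illustrative assumptions, not a prescribed schema:

```python
# Tier 4 sketch: one JSON line per tool call, policy decision, or denial.
import json
from datetime import datetime, timezone

def audit_record(user: str, channel: str, tool: str,
                 decision: str, reason: str) -> str:
    """Serialise one policy decision as an append-only JSON line."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "channel": channel,
        "tool": tool,
        "decision": decision,
        "reason": reason,
    })

line = audit_record("anna", "company-public", "hr-system",
                    "deny", "tool not whitelisted for channel")
print(line)
```

Because denials are logged with their reason, a misconfigured whitelist shows up in the audit trail as a pattern of unexpected denies, which is the safe direction for a failure to surface.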
The key message for decision-makers: Tier 1 is what you can implement this week. It eliminates the most dangerous class of information disclosure — cross-domain leakage — without touching your IAM infrastructure. Tiers 2 through 4 add progressively finer control as your requirements demand.
Default Deny, Not Default Permit
One design principle deserves emphasis because it is the most common mistake I see in enterprise AI deployments.
Most organisations approach agent security with a blacklist mindset: the agent can do everything, and we block specific things that are dangerous. This fails for the same reason blacklists always fail — you must enumerate every dangerous combination, and you must never forget one. When a new system is connected, it is automatically accessible until someone remembers to block it.
Dissemination Control uses the inverse: default deny. No tool, no system, no data source is available until it is explicitly enabled for a specific context. A new system connected to the agent is automatically invisible in every channel until someone deliberately whitelists it for the channels that need it.
This means the failure mode is safe. A misconfiguration results in the agent having less access than intended, not more. In security architecture, this is not a nice-to-have. It is the only defensible approach.
The Enforcement Problem
A reasonable question at this point: "Why not just tell the agent in the system prompt which data to share where?"
Because prompts are suggestions, not enforcement. An LLM that is instructed "do not share HR data in public channels" will mostly comply — until a cleverly worded question, a long conversation that dilutes the instruction, or a prompt injection overrides it. Prompt-based restrictions are a behavioural layer, not a security boundary.
In the architecture I have built, enforcement happens at the infrastructure level. Tools are removed from the LLM request before it is sent — the agent cannot call what it cannot see. Token exchange happens at the MCP server — the agent queries with the user's permissions, not its own. Policy evaluation happens deterministically in a policy engine — no LLM in the access control path.
The prompt layer still exists — it controls how the agent behaves within its permitted scope (tone, verbosity, level of detail per channel). But it is not the security boundary. It is the courtesy layer on top of the enforcement layer.
Slug-Level Enforcement
One technical detail that matters more than it seems: how tool identifiers are bound to target systems.
If the tool identifier is a string parameter that the LLM passes ("call system: hr-system"), then an attacker who manipulates the agent's output can redirect the call. Prompt injection could change "hr-system" to "finance-system" in the tool call parameters.
In this architecture, tool identifiers are embedded in MCP server URL paths — the target system is determined by the URL route, not by a parameter the LLM controls. An attacker who manipulates the LLM's output can change query parameters, but cannot change which endpoint the request is routed to. This is a small architectural decision with significant security implications.
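The difference can be sketched with a minimal routing table: the target system is closed over at route registration time, so nothing in the request parameters (which the LLM can influence) can redirect the call. The route paths and handler shape below are illustrative, not the reference implementation's actual API:

```python
# Slug-level enforcement sketch: the target system is bound to the URL
# route at registration time, not passed as a parameter by the LLM.

def make_handler(system_slug: str):
    def handler(query_params: dict) -> str:
        # system_slug is fixed when the route is registered; the LLM
        # only influences query_params and cannot change the target.
        return f"querying {system_slug} with {query_params}"
    return handler

ROUTES = {
    "/mcp/ticket-system/query": make_handler("ticket-system"),
    "/mcp/wiki/query": make_handler("wiki"),
    # /mcp/hr-system/... simply is not registered for this channel.
}

def dispatch(path: str, query_params: dict) -> str:
    handler = ROUTES.get(path)
    if handler is None:
        return "404: no such endpoint"  # default deny
    return handler(query_params)

# Injected parameters cannot reroute the call to another system:
print(dispatch("/mcp/ticket-system/query", {"system": "hr-system"}))
# the query still goes to ticket-system, and an unregistered route
# returns a 404 rather than leaking that the system exists:
print(dispatch("/mcp/hr-system/query", {}))
# → 404: no such endpoint
```

Combined with Tier 1, this means an attacker controlling the LLM's output can at most vary the parameters of an already-whitelisted call, never the endpoint it reaches.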
What This Does Not Solve
No architecture eliminates all risk. Two limitations deserve explicit acknowledgement.
The mosaic problem. Individual pieces of non-sensitive information can, in combination, yield confidential conclusions. The agent might not have access to HR data, but if it knows project timelines, team changes, and budget adjustments, a skilled questioner could piece together that a layoff is coming. Task-scoped access limits the data available for such inferences, but does not eliminate the possibility.
Prompt-based circumvention at the behavioural layer. Tool containment and identity delegation are enforced at the infrastructure level and cannot be bypassed through prompts. But the behavioural steering layer (how the agent formulates responses within its permitted scope) relies on prompt compliance, which is probabilistic. A determined attacker with access to the channel could potentially extract more information than the prompt policy intends. This is why behavioural steering is a governance layer, not a security boundary.
Both are documented residual risks. The architecture provides the same level of protection as a competent employee who receives the right files for the right task. For most enterprise contexts, this is a defensible risk posture.
The Blueprint
The full architecture — five governance layers, four adoption tiers, component architecture, request flows, defence-in-depth analysis, design decisions, and a reference implementation — is published as an open document under CC BY-SA 4.0:
→ Dissemination Control for AI Agents — Architecture Blueprint
It is vendor-neutral. The reference implementation uses Mattermost, Keycloak, OPA, and Spring Boot, but every component role can be fulfilled by equivalent products — Slack, Teams, Entra ID, Okta, Jira, Confluence, whatever your organisation already runs.
If you are deploying AI agents with tool access in multi-user environments, this is the gap between your IAM and your agent that is not going to close by itself.
Andre Jahn is a Solution Architect specialising in AI agent governance. He works with enterprises to ensure their AI agents operate within existing governance structures — not around them. More at jahnconsulting.io.