
Your identity infrastructure is solid. Okta or Microsoft Entra handles authentication. Users log in through SSO. Access to applications is tied to group membership, automatically provisioned and deprovisioned as people join and move teams. You've got MFA. You've got conditional access policies. You've got a reasonably clean answer to "who can access what."
That infrastructure was built for a specific model of access: a human, with a device, authenticating to an application. It's genuinely good at that.
But it was never designed for the model that's now arriving: an AI agent, acting on behalf of that human, doing things inside those applications at machine speed — without the human watching each action. And that gap is wider than most IT teams have had time to fully think through.
Here's a precise description of what your IAM system does when an employee's AI agent starts working.
The employee authenticates. Okta confirms who they are, checks which applications they're authorised to access, and issues the appropriate tokens. The agent inherits those credentials and uses them to connect to whichever systems it needs.
That's where Okta's reach ends.
Once the agent is operating inside Salesforce, or GitHub, or your internal ticketing system, the actions it can take are governed by that application's own permission model — not by your IAM. Okta knows the employee has access to Salesforce. It has no mechanism to distinguish between an agent reading a single customer record and an agent bulk-exporting every record in the database. Those operations look identical from the identity layer's perspective.
For human access, this is a manageable limitation. A sales rep with read-and-update access to customer records wouldn't accidentally bulk-export the entire CRM. There's a human in the loop making each decision. Self-limiting behaviour keeps the blast radius of any individual action small.
AI agents don't have that self-limiting behaviour. They execute workflows. They take the path that completes the task. An agent preparing a customer report might query far more records than the task required — not maliciously, not because of a configuration error, but because "gather relevant context" has a different scope when a machine executes it versus when a human does.
Consider a senior developer at your company. They've been given GitHub access that reflects their role: they can create and review pull requests, manage branches, access repository settings, push commits, and view repository secrets — because they're a senior developer who manages infrastructure. Okta knows they have "access to GitHub." That's the level of granularity the identity layer operates at.
Now they connect Claude Code to the GitHub MCP server. Claude Code inherits the developer's credentials. The GitHub MCP server exposes the following tools:
• list_repositories — lists all repos the user has access to
• get_file_contents — reads any file in any repo
• create_branch — creates a new branch
• create_pull_request — opens a pull request
• delete_branch — deletes a branch
• push_to_branch — pushes commits, including to protected branches
• get_repository_secrets — reads configured secrets
• update_webhook — modifies repository webhooks
When the developer asks Claude Code to "help me review and clean up the branch structure in our main repositories," the agent has every tool in that list available to it. It can delete branches — including ones it shouldn't. It can access secrets — which the developer could do manually but wouldn't in this context. It can push directly to main if that serves the cleanup task.
The developer's intent was code review assistance. The agent's available capability is full repository control. The IAM layer said "this user has access to GitHub." It said nothing about what the agent should be allowed to do inside GitHub.
That gap — between application access and action access — is the new access control problem.
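To make the gap concrete, here is a minimal sketch of the two checks involved. All names and policy shapes here are invented for illustration, not any particular gateway's API: the point is that the IAM layer can only answer the first question, while action-level governance requires answering the second.

```python
# Hypothetical illustration: the two questions asked at each layer.
# Team names, tool names, and data structures are invented for this sketch.

# What the IAM layer knows: which applications a user (via their team) may reach.
APP_ACCESS = {"dev-team": {"github", "jira"}}

# What a gateway adds: which tools an agent may call inside each application.
TOOL_ALLOWLIST = {
    ("dev-team", "github"): {
        "create_pull_request",
        "list_branches",
        "get_file_contents",
    },
}

def iam_allows(team: str, app: str) -> bool:
    """Application-level check: the granularity Okta/Entra operates at."""
    return app in APP_ACCESS.get(team, set())

def gateway_allows(team: str, app: str, tool: str) -> bool:
    """Action-level check: must pass the IAM check *and* the tool allow-list."""
    return iam_allows(team, app) and tool in TOOL_ALLOWLIST.get((team, app), set())

# The identity layer cannot tell these two calls apart; the gateway can.
print(gateway_allows("dev-team", "github", "create_pull_request"))  # True
print(gateway_allows("dev-team", "github", "delete_branch"))        # False
```

Both calls come from the same authenticated user with the same GitHub access; only the action-level check distinguishes them.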
Not all MCP-connected systems are equal. The gap is most consequential where the combination of action breadth and data sensitivity is highest:
• CRM (e.g. Salesforce). Agent capabilities often include: read records, update records, delete records, bulk export, run reports across the full customer database, merge records, send emails. The governance question: should agents be able to bulk export? Delete records? Access accounts outside an employee's assigned territory?
• Code hosting (e.g. GitHub). Agent capabilities often include: read code, create branches, merge PRs, push to any branch, delete branches, access secrets, modify CI/CD configuration. The governance question: should agents be able to push to protected branches? Access secrets? Modify pipeline configs?
• Databases. Agent capabilities often include: read any table, run arbitrary queries, write records, delete records, execute stored procedures. The governance question: should agents be able to query tables with PII? Run unrestricted bulk queries? Write or delete records?
• Messaging (e.g. Slack). Agent capabilities often include: read messages (including private channels), post messages, create or delete content, access message history. The governance question: should agents be able to read all channels, including private ones? Post on behalf of the user?
In each case, the IAM layer addresses application access. The action level is unaddressed.
The control that fills this gap lives in the MCP layer — specifically, in a gateway that sits between employees' AI tools and the systems they connect to.
At the application level, it looks familiar: this team can use Salesforce through their AI tool, that team can use GitHub, the finance group can access the ERP. Recognisable territory, just applied to agent access rather than human access.
The materially different capability is the action level. In practice, this means explicit allow-lists per team, per system:
- Create pull request
- Review pull request and leave comments
- List branches
- Read file contents
- Read account records (own territory only)
- Update account and contact records
- Create and update opportunities
And block-lists per team, per system:
- Delete branch (blocked)
- Push to main or release branches (blocked)
- Access repository secrets (blocked)
- Modify webhooks (blocked)
- Bulk export (blocked)
- Delete records (blocked)
- Access accounts outside assigned region (blocked)
- Run full-database queries (blocked)
These aren't hypothetical distinctions. They're the difference between an agent being a useful collaborator and an agent having the ability to do things that should require a human decision.
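One way a gateway can apply these lists is at tool discovery time: when the MCP server advertises its tools, the gateway strips anything not explicitly allowed, so the agent never even sees the blocked capabilities. A sketch, using the hypothetical GitHub tool names from earlier (the filtering rule shown here, allow-list plus default-deny, is an assumption about design, not a specific product's behaviour):

```python
def filter_advertised_tools(tools: list[str], allow: set[str], block: set[str]) -> list[str]:
    """Strip blocked tools from an MCP tools/list response.

    A tool is visible only if explicitly allowed and not explicitly blocked;
    anything unlisted is denied by default.
    """
    return [t for t in tools if t in allow and t not in block]

# What the MCP server advertises for this user's credentials:
ADVERTISED = [
    "create_pull_request",
    "list_branches",
    "delete_branch",
    "get_repository_secrets",
]

# What the team's policy permits:
ALLOW = {"create_pull_request", "list_branches", "get_file_contents"}
BLOCK = {"delete_branch", "get_repository_secrets", "update_webhook"}

visible = filter_advertised_tools(ADVERTISED, ALLOW, BLOCK)
print(visible)  # ['create_pull_request', 'list_branches']
```

The agent working from the filtered list can still open pull requests and inspect branches; the destructive tools are simply absent from its world.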
The practical concern with action-level access control is that it creates a lot of configuration surface — and if IT has to manage every tool-level permission decision for every team, it becomes a bottleneck.
The model that works at scale: IT defines the outer boundary; team managers own the inner configuration.
How it plays out:
1. IT decides which applications are available through AI tools at all: GitHub, yes; production database direct access, no.
2. IT sets the outer boundary for each application: for GitHub, agents can read and create but never delete or push to main, regardless of team.
3. Within that boundary, team managers configure tool-level access for their domain. The engineering lead decides which GitHub tools developers can actually use. The sales ops manager configures what Salesforce tools the sales team's agents can call.
4. IT sees everything: every policy, every delegation decision, every tool call in the audit log. They're not making every decision. They have full visibility.
This mirrors the delegation model that works well for application access governance: policy ownership sits where the context lives, with IT maintaining oversight. The difference is that it operates at the action level rather than the application level.
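The two-tier model can be expressed as a simple set computation: a team's effective tool set is its manager's configuration intersected with IT's outer boundary, with IT's blocks always winning. A sketch with invented policy structures:

```python
# Hypothetical two-tier policy. IT defines the outer boundary per application;
# team managers configure within it. All names are illustrative.

IT_BOUNDARY = {
    "github": {
        "allow": {
            "create_pull_request", "review_pull_request",
            "list_branches", "get_file_contents", "create_branch",
        },
        "block": {"delete_branch", "push_to_branch", "get_repository_secrets"},
    },
    # Note: no entry for the production database -> not offered through AI tools at all.
}

TEAM_CONFIG = {
    # The engineering lead's choices. "delete_branch" is requested here,
    # but IT's boundary will strip it.
    ("engineering", "github"): {
        "create_pull_request", "review_pull_request", "delete_branch",
    },
}

def effective_tools(team: str, app: str) -> set[str]:
    """Team choices only take effect inside the IT boundary; IT blocks always win."""
    boundary = IT_BOUNDARY.get(app)
    if boundary is None:
        return set()  # application not available through AI tools
    requested = TEAM_CONFIG.get((team, app), set())
    return (requested & boundary["allow"]) - boundary["block"]

print(sorted(effective_tools("engineering", "github")))
# ['create_pull_request', 'review_pull_request']
```

The manager's over-broad request for `delete_branch` never becomes effective policy, and IT can audit both layers without having authored the inner one.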
Most enterprises with MCP in the stack show a consistent pattern: application-level access roughly handled, action-level access essentially unaddressed. Developers are connecting to systems, and agents are inheriting full permission sets for whatever tools those systems expose. There are no explicit allow-lists. There are no per-action policies. The assumption — that employees won't misuse access, that agents will behave like cautious humans — is doing governance work that governance infrastructure should be doing.
That assumption is mostly correct. Most of the time, agents do something reasonable. But "mostly correct, most of the time" isn't a compliance posture. It isn't a defensible answer when something unexpected happens and someone asks: what controls did IT have in place to govern what AI agents could do?
The IAM layer you have is good at what it was built to do. Building the layer that covers what it wasn't built for is the governance work that's actually on IT's desk right now.
You could reduce the blast radius by narrowing what users can do inside applications. But that means restricting what humans can do, which has real operational costs. The better answer is to give humans their appropriate access and separately govern what agents can do with that access, through a layer that operates at the action level without affecting human workflows.
An MCP gateway doesn't replace Okta or Entra; it's a complementary layer. Your IDP continues to manage human identity and application access, and the MCP gateway extends that governance to cover what agents do inside those applications. The two systems integrate: group membership in your IDP drives access policy in the gateway, and offboarding events propagate from your IDP to gateway access revocation automatically.
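That integration can be sketched as two small handlers: one derives gateway policy from IDP group membership, the other revokes everything on an offboarding event. Event shapes, group names, and function names below are invented; a real integration would consume SCIM provisioning or webhook events from Okta or Entra.

```python
# Hypothetical IDP-to-gateway wiring. Group and policy names are illustrative.

# Mapping maintained by IT: IDP group -> gateway policy bundle.
GROUP_TO_POLICY = {
    "eng-developers": "github-standard",
    "sales-reps": "salesforce-standard",
}

# Gateway-side state: which policy bundles each user currently holds.
ACTIVE_GRANTS: dict[str, set[str]] = {}

def sync_user(user: str, idp_groups: set[str]) -> set[str]:
    """Derive gateway policies from IDP group membership (groups without a
    mapping, e.g. 'all-staff', simply grant nothing)."""
    ACTIVE_GRANTS[user] = {
        GROUP_TO_POLICY[g] for g in idp_groups if g in GROUP_TO_POLICY
    }
    return ACTIVE_GRANTS[user]

def handle_deprovision(user: str) -> None:
    """Offboarding event from the IDP revokes all gateway access."""
    ACTIVE_GRANTS.pop(user, None)

print(sync_user("alice@example.com", {"eng-developers", "all-staff"}))
# {'github-standard'}
```

When the IDP later fires a deprovision event for that user, `handle_deprovision` removes every grant in one step; no per-application cleanup is needed on the gateway side.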
Start with a principle-of-least-privilege approach: what does each team actually need agents to do, not what could they potentially do? For most teams, the list of genuinely useful agent actions is much narrower than the full tool set the MCP server exposes. Engineering teams need PR creation and code review tools. Sales teams need record read and update. Start narrow and expand as teams demonstrate the need for additional tools.
When an agent calls a blocked tool, two things happen: the call is blocked (the agent doesn't execute it), and the attempt is logged as a security event. This is important — blocked attempts should surface to IT, not fail silently. A pattern of attempts to call tools that are consistently blocked may indicate a misconfigured agent, an unexpected workflow, or an employee whose allowed tool set needs to be revisited.
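A minimal sketch of that behaviour, with invented names and a structured log line standing in for whatever event pipeline the gateway actually uses:

```python
import json
import logging

logger = logging.getLogger("mcp_gateway.audit")  # hypothetical logger name

BLOCKED_TOOLS = {"get_repository_secrets", "delete_branch", "push_to_branch"}

def handle_tool_call(user: str, tool: str) -> dict:
    """Hypothetical gateway handler: deny blocked calls and surface the
    attempt as a security event, never fail silently."""
    if tool in BLOCKED_TOOLS:
        # Structured event so blocked attempts reach IT dashboards/alerts.
        logger.warning(json.dumps({
            "event": "tool_call_blocked",
            "user": user,
            "tool": tool,
        }))
        return {"status": "blocked", "reason": f"'{tool}' is not permitted for agents"}
    return {"status": "allowed"}

print(handle_tool_call("dev@example.com", "get_repository_secrets"))
```

Returning an explicit denial (rather than a generic error) also gives the agent something it can relay to the user, which makes misconfigurations visible from both ends.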
Automated agents (not tied to a specific user session, running scheduled workflows) need the same action-level governance as user-initiated agents — sometimes more restrictive, because there's no human oversight of what they're doing during a run. The MCP gateway should support service account-style principals with their own allow-list configurations, separate from user-delegated access.