
Twelve months ago, when your company approved GitHub Copilot or rolled out Claude for Work, the security conversation was mostly about data leakage — what the model might train on, whether prompts were stored. Contained questions with reasonably contained answers.
That conversation is now the wrong conversation.
In the past two quarters, every significant AI tool in the enterprise stack added support for the Model Context Protocol (MCP) — the standard that lets AI agents connect directly to external systems and take actions on them. Claude Code. GitHub Copilot Agent Mode. Cursor. Windsurf. Microsoft 365 Copilot.
What this means in plain terms: the AI tools your employees already have installed on their laptops can now reach your internal databases, call your internal APIs, create tickets, send messages, query customer records, and execute workflows — through a standardised interface that every tool now speaks.
This is genuinely good news for productivity. It's also a meaningful shift in what IT is responsible for. And unlike most governance challenges, this one is already in your environment, not coming soon.
Before MCP, AI tools were largely isolated. They could read what you pasted into them. Integrations with external systems required custom connectors, typically built and maintained by engineering teams. Most employees experienced AI tools as sophisticated text boxes — useful for generating content and explaining code, but not connected to the systems where their actual work lived.
MCP changed the architecture. It's a standard protocol, backed by Anthropic and now adopted across the industry, that gives AI tools a common interface for connecting to external data sources and taking actions. Think of it less like a feature and more like what HTTP did for web browsers — a universal standard that, once widely adopted, unlocked a wave of interoperability.
Here's what that means concretely:
• Before MCP: Connecting Claude to Salesforce required a custom-built Salesforce connector maintained by your engineering team. Repeat for every tool and every system.
• After MCP: Any MCP-compatible AI tool can connect to any system with an MCP server. No custom connector. No engineering ticket. An employee edits a config file and they're connected.
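For illustration, a local MCP configuration typically looks something like the following. The exact file name and schema vary by tool, and both server entries here are hypothetical examples, not a recommendation:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "<personal access token>" }
    },
    "internal-db": {
      "url": "https://mcp.internal.example.com/postgres"
    }
  }
}
```

A few lines of JSON, a personally generated token, and the agent is connected. Nothing in this flow creates a provisioning record.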
For employees, this is the productivity unlock that AI tool deployments have been promising but not quite delivering. For IT, it changes the surface area you're responsible for — and it already happened, without a procurement decision or a security review.
Here's the inventory that most IT teams don't have a clear picture of yet:
Claude Code. Natively MCP-compatible. Engineers configure MCP servers in their local settings file, and Claude Code can query those systems, execute commands, and take actions as part of its agentic workflows. Available on Team and Enterprise plans — which means it's already in developer hands at most companies that run Anthropic products.
What it can reach with MCP enabled: GitHub repositories, internal databases, Jira, Confluence, internal APIs, Slack, custom tools your team has built MCP servers for.
GitHub Copilot Agent Mode. Shipped to VS Code users recently. Copilot crossed 26 million users and is in 90% of Fortune 100 companies. In Agent Mode, Copilot connects to MCP servers to retrieve context and take actions beyond code generation — querying documentation, filing issues, updating project management tools.
What changed with this update: The product your company licensed for code completion is now an agent that can reach your internal systems. Same license, expanded capability, no new security review triggered.
Cursor. The AI code editor with the fastest growth in enterprise developer adoption over the past 18 months. Built-in MCP support. Many developers now use Cursor as their primary interface, which makes the MCP servers it can reach a meaningful question for IT.
Windsurf. Similar developer profile to Cursor. Growing rapidly in engineering teams. MCP-native from the ground up.
Microsoft 365 Copilot. Microsoft is expanding its connectivity through MCP. The Microsoft Graph API — which covers Exchange, Teams, SharePoint, and OneDrive data — is being exposed through MCP-compatible interfaces. For Microsoft-heavy enterprises, this is the development to watch.
The common thread: none of these require a new procurement decision. MCP capability arrives through an update to tools employees already trust and use daily.
In a standard application access model, IT knows what applications employees have access to — because access is provisioned through Okta or Entra. The provisioning event creates a record.
MCP connections don't work this way. An employee opens their AI tool's settings file and adds an MCP server URL. There is no provisioning event. No ticket in your ITSM. No notification to IT.
What this looks like in practice: Six weeks after Claude Code rolls out to your engineering team, you have 40 developers running a combined 200+ MCP connections — to internal APIs, production databases, third-party SaaS platforms, and a handful of community-built MCP servers nobody formally evaluated. IT has zero central visibility into any of it. The question "which engineers have Claude Code connected to our production database?" has no fast answer.
Your Okta or Entra setup manages human access to applications. It does not govern what an AI agent does inside those applications.
Here's the gap in concrete terms. Consider a typical GitHub MCP server. It exposes tools including:
• List repositories and branches
• Read file contents
• Create a pull request
• Review a pull request
• Delete a branch
• Push commits directly to a branch
• Push to protected branches (main, release)
• Access repository secrets and environment variables
An employee with senior developer access to GitHub might legitimately have permission for all of these. But when an AI agent inherits those permissions, the question changes: should the agent be able to push directly to main? Access secrets? Delete branches?
Your IAM says the user can access GitHub. It has no mechanism to distinguish between "the agent can create PRs" and "the agent can do anything the user can." That distinction — action-level access control — doesn't exist in your identity infrastructure. It requires a separate layer.
Your SaaS platforms generate audit logs. But those logs were designed to track human actions — they record the user account that made a change, not whether the action was taken by a human or an AI agent working on their behalf.
This creates a real investigation gap. When something goes wrong — an unexpected change in a production system, a data export that shouldn't have happened, an agent call that triggered a downstream error — your ability to reconstruct what happened depends on having a log at the tool level: which tool was called, what parameters were passed, what the system returned, when it happened.
That log doesn't exist in most enterprise environments today, because there was no central point to capture it.
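What such a tool-level record needs to capture can be sketched in a few lines. The field names here are illustrative, not any vendor's schema:

```python
import json
from datetime import datetime, timezone

def tool_call_record(user, agent, server, tool, params, result_status):
    """Build the tool-level audit record described above: which tool was
    called, with what parameters, by which agent on whose behalf, and when."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,              # the human the agent acts for
        "agent": agent,            # the AI tool making the call
        "server": server,          # which MCP server was targeted
        "tool": tool,              # which tool on that server
        "params": params,          # what parameters were passed
        "status": result_status,   # what the system returned
    }

entry = tool_call_record(
    user="jane@example.com", agent="claude-code",
    server="github", tool="create_pull_request",
    params={"repo": "payments", "base": "main"}, result_status="ok",
)
print(json.dumps(entry))
```

The point is not the schema but the capture point: only a component that sits between the agent and the system can record both the human identity and the agent action in one entry.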
The governance window is shorter than it appears. Here's the trajectory:
• 12 months ago: MCP was primarily a developer curiosity. A handful of early-adopter developers were experimenting with it. Minimal enterprise footprint.
• 6 months ago: Major AI tool vendors shipped MCP support. Engineering teams at early-adopter companies started connecting tools to internal systems at scale.
• Today: MCP is the standard AI tools are converging on. Enterprise SaaS vendors are publishing MCP servers. The number of potential connections is growing faster than any IT team can track manually.
• 6 months from now: MCP connectivity will be assumed. Employees will expect their AI tools to reach internal systems by default. The governance gap that's currently invisible in most enterprises will be visible through incidents.
The organisations that build governance infrastructure during this window will have a defensible posture. The ones that wait will be building it in response to something that already went wrong.
The right response is not to block MCP-capable tools. Blocking forfeits the productivity gain and drives shadow usage: employees route around restrictions on personal machines or personal accounts. The right response is to put the infrastructure in place before the connections multiply past the point of manageability.
Build a central inventory point. All AI agent connections to internal systems should flow through a single gateway. Not to micromanage which tools employees use, but because a central flow point is the only way to know what's connected and what it's doing. A gateway that employees configure their AI tools to point at becomes the inventory automatically: everything flowing through it is logged.
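Concretely, routing through a gateway means the local MCP config points at the gateway rather than directly at each system. A hypothetical example (hostname and path invented; the exact config schema varies by tool):

```json
{
  "mcpServers": {
    "github": {
      "url": "https://mcp-gateway.internal.example.com/github"
    }
  }
}
```

The employee's experience barely changes, but the credentials now live at the gateway rather than in a local file, and every connection and tool call passes a point IT can see.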
Establish action-level access policy. IT needs the ability to specify not just which applications employees can connect to, but which tools within those applications their agents can call. "Developers have access to GitHub through AI" isn't a policy. The policy is: they can create and review pull requests, list branches, and read file contents, but not push to main, delete branches, or access secrets. That granularity requires infrastructure that operates at the tool level.
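The policy just described can be sketched as data plus a default-deny check. A minimal sketch, with all group, server, and tool names illustrative rather than any real gateway's API:

```python
# Illustrative action-level policy: which MCP tools an agent may call
# on behalf of members of a given identity group. Names are hypothetical.
POLICY = {
    "developers": {
        "github": {
            "allow": {"list_repositories", "list_branches", "read_file",
                      "create_pull_request", "review_pull_request"},
            # Denied even though the human user's own GitHub role permits them:
            "deny": {"push_to_branch", "delete_branch", "read_secrets"},
        }
    }
}

def agent_may_call(group: str, server: str, tool: str) -> bool:
    """Return True only if the tool is explicitly allow-listed for the group."""
    rules = POLICY.get(group, {}).get(server)
    if rules is None:
        return False  # default-deny: unknown groups and servers are blocked
    return tool in rules["allow"] and tool not in rules["deny"]

print(agent_may_call("developers", "github", "create_pull_request"))  # True
print(agent_may_call("developers", "github", "push_to_branch"))       # False
```

This is exactly the distinction IAM cannot express: the user's GitHub role may permit every one of these actions, while the agent acting for them is held to the allow-listed subset.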
Integrate with existing identity systems. The governance model for AI agents should integrate with Okta, Entra, or whatever identity provider is already running. Group membership should drive agent access policy. Offboarding events should revoke agent access automatically. The operational pattern should mirror managing application access, because the team doing it is the same team.
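The operational pattern can be sketched in a few lines. The group-to-server mapping and all names here are invented for illustration, assuming the gateway receives group membership and deprovisioning events from the IdP:

```python
# Illustrative only: IdP group membership drives agent access,
# and an offboarding event revokes it. Mappings are made up.
GROUP_POLICIES = {"engineering": ["github", "jira"], "sales": ["salesforce"]}

class AgentAccess:
    def __init__(self):
        self.grants: dict[str, set[str]] = {}  # user -> reachable MCP servers

    def sync_user(self, user: str, idp_groups: list[str]) -> None:
        """Derive a user's agent access from their current IdP groups."""
        servers: set[str] = set()
        for group in idp_groups:
            servers.update(GROUP_POLICIES.get(group, []))
        self.grants[user] = servers

    def offboard(self, user: str) -> None:
        """A deprovisioning event from the IdP revokes all agent access."""
        self.grants.pop(user, None)

access = AgentAccess()
access.sync_user("jane", ["engineering"])
print(sorted(access.grants["jane"]))  # ['github', 'jira']
access.offboard("jane")
print("jane" in access.grants)        # False
```

The design choice is that agent access is never granted directly: it is always derived from the IdP, so revocation is a side effect of offboarding rather than a separate cleanup task.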
Start with engineering. The heaviest MCP adoption today is in developer tools: Claude Code, Cursor, and GitHub Copilot Agent Mode are where most of the connections are, so that is where governance starts. But plan to scale: Microsoft 365 Copilot's MCP expansion will bring the same pattern to business users across sales, finance, HR, and operations within the next year.
Blocking MCP-capable tools is possible in theory. In practice, enforcement is difficult: MCP configuration is a local settings file, and blocking it requires endpoint control that many organisations don't have at that granularity. More importantly, blocking tends to drive shadow usage rather than solve the governance problem. A governed gateway gives IT real control without blanket restrictions that employees route around.
Traditional API governance covers human-initiated integrations with access tokens managed by your IT or engineering team. MCP creates a new category: employee-initiated agent integrations, configured locally, where employees generate their own credentials. The volume, speed, and decentralisation are what make this different, and what make a central gateway the right architectural response.
Three first steps: (1) establish a gateway as the required path for new MCP connections, (2) connect it to your IdP so offboarding automatically revokes agent access, and (3) configure explicit tool allow-lists for your highest-risk systems (anything with write access to production). That's enough to stop the governance gap from growing while you build out the rest.
A well-built MCP gateway registers MCP servers as connectors: both pre-built integrations for common SaaS platforms and custom connectors built from OpenAPI specs or raw MCP server URLs. The gateway normalises the policy and auth layer across all of them, so IT manages one policy system regardless of how many different vendors are in the mix.