Remember when APIs were the backbone of everything? They held the core business logic, acted as middleware for databases and file systems, and were the connective tissue of modern applications. We spent years perfecting them: stabilizing interfaces, versioning carefully, and building robust auth systems to protect them.
But the landscape has shifted dramatically with AI applications. The Model Context Protocol (MCP) provides a standardized way for applications to share context with language models, expose tools and capabilities to AI systems, and compose them into integrations and agent workflows.
In the early days, most MCP implementations were built as local servers communicating over the stdio transport. Local servers also put the upgrade burden on users: every time the server team ships new capabilities, users have to pull the latest package to get them.
As AI-powered applications have evolved, teams are extending MCP servers beyond the local machine to support distributed, agent-based systems. The introduction of remote, HTTP-based MCP servers opened up new possibilities: triggering actions in third-party APIs, automating workflows, and more.
This transition, however, introduces significant security challenges. Unlike their predecessors operating inside protected network boundaries, remote MCP servers are exposed to threats from any network that can reach them. This exposure demands robust security measures, particularly around authentication and authorization.
MCP servers are becoming the de facto standard for AI agent workflows. But there’s one glaring problem:
Almost all MCP servers are being shipped completely naked.
No auth. No identity layer. No idea who's calling them, what they're allowed to do, or how long they should be able to do it for.
If your MCP server is callable from an AI agent or a remote workflow… and there’s no authorization layer in front of it? That’s not just an oversight. That’s a security hole.
The MCP specification was updated in March 2025 to make OAuth the mandated mechanism for accessing remote MCP servers.
Remote MCP servers must enforce secure authorization, ensuring only authenticated actors can access sensitive tools and data.
Let's explore what implementing OAuth for MCP looks like in practice.
This means that if you are building a remote MCP server, you need an OAuth 2.1-based authorization server that is responsible for minting tokens and ensuring that only authorized actors can access your MCP server.
The practical approach is to separate your concerns: an authorization server that handles identity and issues tokens, and the MCP server itself, which validates those tokens before exposing any tools.
Think of it like this: the authorization server is your nightclub bouncer: it checks IDs and issues wristbands. The MCP server is the venue: it only admits people with the right wristband.
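To make the "wristband check" concrete, here is a minimal sketch of what token validation on the MCP server side might look like, assuming the authorization server issues JWT access tokens and publishes a JWKS. The issuer URL, audience value, and /mcp route are illustrative assumptions, not part of the MCP spec or any particular SDK.

```typescript
import express from "express";
import { createRemoteJWKSet, jwtVerify } from "jose";

// Illustrative values: point these at your own authorization server.
const AUTH_ISSUER = "https://auth.example.com";
const JWKS = createRemoteJWKSet(new URL(`${AUTH_ISSUER}/.well-known/jwks.json`));
const EXPECTED_AUDIENCE = "https://mcp.example.com"; // this MCP server's identity

const app = express();
app.use(express.json());

// Reject any request that does not carry a valid token from the authorization server.
app.use(async (req, res, next) => {
  const token = req.headers.authorization?.replace(/^Bearer /, "");
  if (!token) {
    return res.status(401).json({ error: "missing bearer token" });
  }
  try {
    const { payload } = await jwtVerify(token, JWKS, {
      issuer: AUTH_ISSUER,
      audience: EXPECTED_AUDIENCE,
    });
    // Stash the verified claims (subject, scopes) for downstream handlers.
    res.locals.claims = payload;
    next();
  } catch {
    res.status(401).json({ error: "invalid or expired token" });
  }
});

// Hypothetical MCP endpoint; a real handler would delegate to your MCP server implementation.
app.post("/mcp", (_req, res) => {
  res.json({ ok: true, caller: res.locals.claims.sub });
});

app.listen(3000);
```

The key point of the design: the MCP server never issues credentials itself. It only checks the wristband and can be kept small and stateless.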
Implementing scopes in the OAuth flow gives you critical control:
mcp:exec:functions.weather: Can only call weather function
mcp:exec:functions.*: Can call any function
mcp:read:models: Can only read model information
Without scopes, you're essentially giving all-or-nothing access to your entire MCP server—and by extension, to all your backend systems it can reach.
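As a rough illustration (not taken from any SDK), scope enforcement can be as simple as mapping each tool to a required scope and checking the token's granted scopes, including the wildcard form, before dispatching the call. The tool names and scope strings below follow the examples above and are hypothetical.

```typescript
// Required scope per MCP tool; names are illustrative.
const REQUIRED_SCOPES: Record<string, string> = {
  "functions.weather": "mcp:exec:functions.weather",
  "functions.calendar": "mcp:exec:functions.calendar",
};

// Returns true if any granted scope satisfies the required one,
// treating a trailing ".*" as a wildcard (e.g. "mcp:exec:functions.*").
function hasScope(granted: string[], required: string): boolean {
  return granted.some((scope) => {
    if (scope === required) return true;
    if (scope.endsWith(".*")) {
      return required.startsWith(scope.slice(0, -1)); // keep the trailing dot
    }
    return false;
  });
}

function authorizeToolCall(tool: string, tokenScopes: string[]): void {
  const required = REQUIRED_SCOPES[tool];
  if (!required || !hasScope(tokenScopes, required)) {
    throw new Error(`token lacks scope for tool "${tool}"`);
  }
}

// Example: a token scoped to "mcp:exec:functions.*" may call any function tool.
authorizeToolCall("functions.weather", ["mcp:exec:functions.*"]); // ok
```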
Here’s what you’ll need to implement (and what to watch out for): an OAuth 2.1 authorization flow with PKCE, secure token issuance and validation, scope enforcement, and metadata endpoints so clients can discover your authorization server.
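To give a feel for one of those pieces, here is a sketch of an RFC 8414-style authorization server metadata document, assuming for illustration that the metadata is hosted alongside the MCP server at mcp.example.com. The endpoint paths and supported scopes are assumptions, not prescribed values.

```typescript
import express from "express";

const app = express();

// Assumed base URL for illustration.
const BASE_URL = "https://mcp.example.com";

// OAuth 2.0 Authorization Server Metadata (RFC 8414): clients fetch this
// document to discover where to send users for authorization and where to
// exchange authorization codes for tokens.
app.get("/.well-known/oauth-authorization-server", (_req, res) => {
  res.json({
    issuer: BASE_URL,
    authorization_endpoint: `${BASE_URL}/authorize`,
    token_endpoint: `${BASE_URL}/token`,
    registration_endpoint: `${BASE_URL}/register`,
    scopes_supported: ["mcp:exec:functions.*", "mcp:read:models"],
    response_types_supported: ["code"],
    grant_types_supported: ["authorization_code", "refresh_token"],
    code_challenge_methods_supported: ["S256"], // PKCE is required by OAuth 2.1
  });
});

app.listen(3000);
```

And that is only discovery; the authorization, token, and registration endpoints behind it are where most of the real work (and most of the mistakes) live.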
The good news? You don't need to reinvent this wheel. At Scalekit, we are launching a drop-in OAuth authorization server that attaches to your MCP server without major rewrites or migrations.
Scalekit provides turnkey auth infrastructure for MCP servers. Implementation takes minutes, not weeks.
Enterprise teams are rolling out MCPs into production pipelines—and the attack surface is expanding fast.
Stop shipping naked MCP servers. Sign up for early access instead!