Remember when APIs were the backbone of everything? They held the core business logic, acted as middleware for databases and file systems, and were the connective tissue of modern applications. We spent years perfecting them: stabilizing interfaces, versioning carefully, and building robust auth systems to protect them.
MCP: The new middleware reality
But the landscape has shifted dramatically with AI applications. MCP provides a standardized way for applications to:
- Share contextual information with LLMs
- Expose tools and capabilities to AI systems
- Enable composable integrations and workflows
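Under the hood, MCP frames these interactions as JSON-RPC 2.0 messages. A rough sketch of what a tool invocation looks like on the wire; the tool name and arguments here are made up for illustration:

```python
import json

# A hypothetical MCP tool invocation, framed as JSON-RPC 2.0
# (the wire format MCP uses over STDIO and HTTP transports).
# The method follows MCP's "tools/call" shape; the "weather"
# tool and its arguments are invented for this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "weather",
        "arguments": {"city": "Berlin"},
    },
}

wire = json.dumps(request)       # what actually travels over the transport
decoded = json.loads(wire)
print(decoded["method"])         # tools/call
```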
In the early days, most MCP implementations ran as local servers communicating over the STDIO transport. That model puts the upgrade burden on users: whenever the server team ships new capabilities, users must install the latest package to get them.
As AI-powered applications have evolved, teams have extended MCP servers beyond the local machine to support distributed, agent-based systems. Remote, HTTP-based MCP servers open up new possibilities: triggering actions in third-party APIs, automating workflows, and more.
This transition, however, introduces significant security challenges. Unlike their predecessors operating within protected network boundaries, remote MCP Servers are exposed to potential threats across networks. This exposure demands robust security measures, particularly around authentication and authorization.
MCP servers are becoming the de facto standard for AI agent workflows. But there’s one glaring problem:
Almost all MCP servers are being shipped completely naked.
No auth. No identity layer. No idea who's calling them, what they're allowed to do, or how long they should be able to do it for.
Why do remote MCP servers demand auth?
If your MCP server is callable from an AI agent or a remote workflow…and there’s no authorization layer in front of it? That’s not just an oversight. That’s a security hole.
The MCP specification was updated in March 2025 to mandate OAuth as the mechanism for accessing remote MCP servers.
Remote MCP servers must enforce secure authorization, ensuring only authenticated actors can access sensitive tools and data.
Let's explore what implementing OAuth for MCP looks like in practice.
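Before building the authorization server itself, it helps to see the resource-server side of the contract: reject anything without a valid bearer token. A minimal sketch, where the header layout loosely follows OAuth's bearer-token pattern and the metadata URL is a placeholder assumption:

```python
# Sketch of the resource-server side: an MCP endpoint that refuses
# requests lacking a Bearer token. The WWW-Authenticate header format
# follows RFC 6750; the metadata URL is an invented placeholder.

def check_authorization(headers: dict) -> tuple:
    """Return (status_code, response_headers) for an incoming request."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        # Point the client at the authorization server so it can
        # go obtain a token before retrying.
        return 401, {
            "WWW-Authenticate": 'Bearer resource_metadata='
            '"https://mcp.example.com/.well-known/oauth-protected-resource"'
        }
    return 200, {}

status, hdrs = check_authorization({})  # no token supplied
print(status)  # 401
```

In a real server you would also validate the token itself (signature, expiry, scopes) before returning 200; this sketch only shows the refusal path.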
Implementing an authorization server
This means that if you are building a remote MCP server, you need an OAuth 2.1-based authorization server responsible for minting tokens and ensuring that only authorized actors can access it.
The practical approach is separating your concerns:
- MCP server: the resource server that holds the valuable stuff, your tools and business logic
- Authorization server: Your identity gatekeeper that issues tokens
Think of it like this: the authorization server is your nightclub bouncer: it checks IDs and issues wristbands. The MCP server is the venue: it only admits people with the right wristband.
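To make the split concrete, here is a toy sketch of both roles sharing one HS256 key. Everything here is illustrative: SECRET, the issuer URL, and the mint_token/verify_token helpers are invented, and a real deployment would use a vetted JWT library with asymmetric keys rather than hand-rolled crypto:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # assumption: shared HS256 key, for the sketch only

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(subject: str, scopes: list, ttl: int = 300) -> str:
    """Authorization-server role (the bouncer): issue a short-lived JWT."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({
        "iss": "https://auth.example.com",  # placeholder issuer
        "sub": subject,
        "scope": " ".join(scopes),
        "exp": int(time.time()) + ttl,
    }).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

def verify_token(token: str) -> dict:
    """MCP-server role (the venue): check signature and expiry before trusting claims."""
    header, claims, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{claims}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(claims + "=" * (-len(claims) % 4)))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload

token = mint_token("agent-42", ["mcp:exec:functions.weather"])
print(verify_token(token)["sub"])  # agent-42
```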

Scopes in OAuth
Implementing scopes in the OAuth flow gives you critical control:
- mcp:exec:functions.weather: can only call the weather function
- mcp:exec:functions.*: can call any function
- mcp:read:models: can only read model information
Without scopes, you're essentially giving all-or-nothing access to your entire MCP server—and by extension, to all your backend systems it can reach.
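A scope check along these lines can be sketched with simple wildcard matching. The scope names reuse the examples above, but the ".*" wildcard convention and the is_allowed helper are assumptions of this sketch, not part of any spec:

```python
from fnmatch import fnmatchcase

def is_allowed(granted: list, required: str) -> bool:
    """True if any granted scope pattern covers the required scope.

    Granted scopes may contain shell-style wildcards, so a token
    holding mcp:exec:functions.* covers every function scope.
    """
    return any(fnmatchcase(required, pattern) for pattern in granted)

print(is_allowed(["mcp:exec:functions.*"], "mcp:exec:functions.weather"))  # True
print(is_allowed(["mcp:read:models"], "mcp:exec:functions.weather"))       # False
```

The MCP server would run this check per tool invocation, after validating the token itself.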
Practical tips for implementing OAuth authorization server
Here’s what you’ll need to implement (and what to watch out for):
- Use dynamic client registration: Allow AI agents or tools to register securely. Don’t hardcode credentials. Support dynamic registration workflows with client metadata validation.
- Always use PKCE (Proof Key for Code Exchange): Especially for public clients (e.g., browser-based agents or frontend tools). PKCE replaces the need for a client secret and protects against authorization code interception.
- Support token introspection: Your MCP server needs a way to verify incoming tokens by checking expiry, scopes, issuer, and subject. Use introspection or JWT validation depending on your token format.
- Scopes as guardrails: Scopes determine what a token holder can do. Don’t skip this. Define granular scopes like:
- tools:calendar.read
- tools:crm.write
- mcp:run:workflow-risk-check
- Keep tokens short-lived: 5-30 minutes max. Use refresh tokens if needed. Avoid long-lived tokens: they’re liabilities if leaked.
- Log and audit everything: Who got a token? What scopes? When did they call the MCP server? Audit logs are your first line of defense when things go wrong.
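The PKCE step from the list above is easy to sketch. Per RFC 7636's S256 method, the client keeps a random code_verifier secret and sends only its hash (the code_challenge) with the authorization request; the helper names here are made up:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple:
    """Client side: generate a random verifier and its S256 challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """Authorization-server side: recompute the challenge at token exchange."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

v, c = make_pkce_pair()
print(verify_pkce(v, c))  # True
```

Because only the hash travels with the authorization request, an attacker who intercepts the authorization code still cannot redeem it without the original verifier.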
Common pitfalls to avoid: Auth for MCP
- Skipping token validation at the resource server. Don’t assume it’s valid; verify issuer and signature.
- Using vague scopes like default, basic, etc. These mean nothing at audit time.
- Embedding auth logic into MCP itself. Separate concerns; auth belongs at the edge.
Don't build it all from scratch
The good news? You don't need to reinvent this wheel. At Scalekit, we are launching a drop-in OAuth authorization server that attaches to your MCP server without major rewrites or migrations.
Scalekit provides turnkey auth infrastructure for MCP servers. Implementation takes minutes, not weeks.
Next steps
Enterprise teams are rolling out MCPs into production pipelines—and the attack surface is expanding fast.
Stop shipping naked MCP servers. Sign up for early access instead!