May 21, 2025

MCP servers are the new backend: Let’s stop shipping them unsecured

Remember when APIs were the backbone of everything? They held the core business logic, acted as middleware for databases and file systems, and were the connective tissue of modern applications. We spent years perfecting them: stabilizing interfaces, versioning carefully, and building robust auth systems to protect them. With the advent of MCP servers, that connective tissue is shifting toward AI: MCP servers now sit between models and the systems they act on.

MCP: The new middleware reality

The landscape has shifted dramatically with AI applications. MCP provides a standardized way for applications to:

  • Share contextual information with LLMs
  • Expose tools and capabilities to AI systems
  • Enable composable integrations and workflows

In the early days, most MCP implementations were built as local servers communicating over the STDIO transport. Local MCP servers depend on every user pulling the latest package whenever the server team ships updated or upgraded capabilities.

As AI-powered applications have evolved, teams are extending MCP servers beyond the local machine to support distributed, agent-based systems. The introduction of remote, HTTP-based MCP servers opened up new possibilities, including actions in third-party APIs and automated workflows.

This transition, however, introduces significant security challenges. Unlike their predecessors operating within protected network boundaries, remote MCP servers are exposed to potential threats across networks. This exposure demands robust security measures, particularly around authentication and authorization. Resource servers, such as MCP servers, are responsible for enforcing access control and protecting sensitive business logic. Resource servers can advertise their metadata, including authorization server information, via a resource metadata URL, which helps clients discover server capabilities and facilitates secure OAuth flows.
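To make the resource metadata idea concrete, here is a minimal sketch of what such a document might contain, served from a well-known path on the MCP server. The hostnames, scope names, and field values are placeholder assumptions, not real endpoints.

```python
import json

# Sketch of a protected resource metadata document, as an MCP server
# might serve it from /.well-known/oauth-protected-resource.
# All URLs below are placeholders.
resource_metadata = {
    "resource": "https://mcp.example.com",
    "authorization_servers": ["https://auth.example.com"],
    "scopes_supported": ["mcp:exec:functions.weather", "mcp:read:models"],
    "bearer_methods_supported": ["header"],
}

document = json.dumps(resource_metadata, indent=2)
print(document)
```

A client that fetches this document learns which authorization server to talk to before ever attempting an OAuth flow.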

MCP servers are becoming the de facto standard for AI agent workflows. But there’s one glaring problem:

Almost all MCP servers are being shipped completely naked.

No auth. No identity layer. No idea who's calling them, what they're allowed to do, or how long they should be able to do it for. Authorization server discovery mechanisms address part of this: they let MCP clients locate and interact with the correct authorization server, which is the first step toward a flexible, secure architecture.

Why do remote MCP servers demand auth?

If your MCP server is callable from an AI agent or a remote workflow…and there’s no authorization layer in front of it? That’s not just an oversight. That’s a security hole.

The MCP specification was updated in March 2025 to mandate OAuth as the mechanism for accessing remote MCP servers. Authorization server discovery is a key part of this mandate: it allows MCP clients to locate and interact with the correct authorization server, ensuring secure, seamless client-server interactions.

Remote MCP servers must enforce secure authorization, ensuring only authenticated actors can access sensitive tools and data, and MCP clients must be able to discover and interact with authorization servers to complete the OAuth flow. Improper token validation, or a faulty integration with an external authorization server, leads to problems such as invalid access tokens being accepted and outright security vulnerabilities.
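In practice, when an unauthenticated request reaches a remote MCP server, the server responds with a 401 whose WWW-Authenticate header points the client at its resource metadata, kicking off discovery. A minimal sketch, with a placeholder metadata URL:

```python
# Sketch: build the 401 challenge an MCP server returns when a request
# arrives without a valid token. The metadata URL is a placeholder.
def unauthorized_challenge(resource_metadata_url: str) -> dict:
    """Status code and headers for an unauthenticated request."""
    return {
        "status": 401,
        "headers": {
            "WWW-Authenticate": (
                f'Bearer resource_metadata="{resource_metadata_url}"'
            )
        },
    }

challenge = unauthorized_challenge(
    "https://mcp.example.com/.well-known/oauth-protected-resource"
)
print(challenge["headers"]["WWW-Authenticate"])
```

The client parses this header, fetches the metadata document, and learns where to obtain a token.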

Let's explore what implementing OAuth for MCP looks like in practice.

Implementing an authorization server

This means that if you are building a remote MCP server, you need to implement an OAuth 2.1-based authorization server that is responsible for minting tokens and ensuring that only authorized actors can access the MCP server.

The practical approach is separating your concerns:

  1. MCP server: holds the valuable stuff, the business logic and tool capabilities
  2. Authorization server: your identity gatekeeper, a separate entity (often just called the auth server) that issues and validates tokens

Think of it like this: the authorization server is your nightclub bouncer. It checks IDs and issues wristbands (access tokens) to authorized clients. The MCP server is the venue: it only admits people with the right wristband.

OAuth flow for MCP servers
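The flow begins with the client redirecting the user to the discovered authorization server. As a sketch, the authorization request URL can be assembled like this; the endpoint, client ID, redirect URI, and challenge value are all placeholder assumptions:

```python
from urllib.parse import urlencode

# Sketch: build the authorization request URL the client sends the user
# to. Endpoint, client_id, redirect URI, and challenge are placeholders.
def build_authorize_url(authorize_endpoint: str, client_id: str,
                        redirect_uri: str, scope: str,
                        code_challenge: str) -> str:
    params = {
        "response_type": "code",           # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "code_challenge": code_challenge,  # PKCE challenge (S256)
        "code_challenge_method": "S256",
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

url = build_authorize_url(
    "https://auth.example.com/oauth/authorize",
    "client-123",
    "https://agent.example.com/callback",
    "mcp:exec:functions.weather",
    "challenge-abc",
)
print(url)
```

After the user consents, the authorization server redirects back with a code, which the client exchanges at the token endpoint for an access token it can present to the MCP server.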

Scopes in OAuth

Implementing scopes in the OAuth flow gives you critical control. During the OAuth consent process, users are presented with the requested scopes so they can review and understand the permissions being asked for before granting access:

  • mcp:exec:functions.weather: can only call the weather function
  • mcp:exec:functions.*: can call any function
  • mcp:read:models: can only read model information

Scopes enable granular permission management, letting you grant access according to the principle of least privilege. Without scopes, you're essentially giving all-or-nothing access to your entire MCP server, and by extension to every backend system it can reach.

Granting consent for specific scopes is essential to enhance security and user experience, ensuring users or enterprises only approve the access that is necessary.
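Enforcement at the MCP server reduces to a small matcher over the token's granted scopes. Here is a sketch assuming the scope naming above, where a trailing `.*` grants every function in that domain:

```python
from fnmatch import fnmatchcase

# Sketch: check whether any granted scope authorizes the requested
# action. A trailing ".*" (e.g. mcp:exec:functions.*) acts as a
# wildcard over the functional domain.
def scope_allows(granted_scopes: list[str], required: str) -> bool:
    return any(fnmatchcase(required, pattern) for pattern in granted_scopes)

granted = ["mcp:exec:functions.*", "mcp:read:models"]
print(scope_allows(granted, "mcp:exec:functions.weather"))  # wildcard match
print(scope_allows(granted, "mcp:exec:workflows.deploy"))   # never granted
```

Running this check before every tool invocation is what turns scopes from documentation into an actual guardrail.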

Practical tips for implementing OAuth authorization server

Here’s what you’ll need to implement (and what to watch out for):

  • Use dynamic client registration: Allow AI agents or tools to securely and automatically register themselves as OAuth clients at runtime using the OAuth 2.0 Dynamic Client Registration Protocol. Don’t hardcode credentials.
    • Dynamic registration reduces manual configuration and streamlines onboarding; pair it with client metadata validation.
    • Each OAuth client must have its own unique client_id and secret. The client_id identifies the application during authorization flows, and a unique client_id per client is critical for security and compliance.
    • Supporting dynamic client registration is crucial for scalability and interoperability, especially in environments with many clients and authorization servers.
  • Always use PKCE (Proof Key for Code Exchange): Especially for public clients (e.g., browser-based agents or frontend tools). PKCE replaces the need for a client secret and protects against authorization code interception.
  • Support token introspection: Your MCP server needs a way to verify incoming tokens: check expiry, scopes, issuer, and subject. Use introspection or JWT validation depending on your token format, and harden the token endpoint itself, since that is where codes are exchanged for tokens.
  • Scopes as guardrails: Scopes determine what a token holder can do. Don’t skip this. Define granular scopes like:
    • tools:calendar.read
    • tools:crm.write
    • mcp:run:workflow-risk-check
    Avoid overly broad scopes like admin or full_access.

💡 Pro tip: Prefix scopes by functional domain for clarity. MCP access logs become easier to audit this way.

  • Keep tokens short-lived: 5-30 minutes max. Avoid long-lived tokens; they’re liabilities if leaked. Use refresh tokens to obtain new access tokens without forcing the user to reauthorize.
  • Log and audit everything: Who got a token? What scopes? When did they call the MCP server? Audit logs are your first line of defense when things go wrong.
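The PKCE point above is mechanical enough to show directly. This sketch generates a random code verifier and derives its S256 challenge, the pair a public client would use in the authorization request and token exchange:

```python
import base64
import hashlib
import secrets

# Sketch: generate a PKCE code_verifier and its S256 code_challenge.
# The verifier stays with the client; the challenge goes in the
# authorization request, so an intercepted code alone is useless.
def make_pkce_pair() -> tuple[str, str]:
    verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)
    ).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), challenge)
```

At token exchange time, the client presents the original verifier and the authorization server recomputes the hash; a mismatch means the code was stolen.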

Common pitfalls to avoid: Auth for MCP

  • Skipping token validation at the resource server. Don’t assume it’s valid; verify issuer and signature.
  • Using opaque scopes like default, basic, etc. These don’t mean anything at audit time.
  • Embedding auth logic into MCP itself. Separate concerns; auth belongs at the edge.
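The first pitfall, skipping validation, is avoidable with explicit claim checks at the resource server. A sketch operating on an already-decoded token payload; signature verification (via a JWT library or introspection call) is assumed to have happened first, and the issuer and audience values are placeholders:

```python
import time

# Sketch: validate claims of an already-decoded access token payload.
# Signature verification must happen before these checks.
def validate_claims(payload: dict, expected_issuer: str,
                    expected_audience: str) -> bool:
    if payload.get("iss") != expected_issuer:
        return False   # wrong or missing issuer
    if payload.get("aud") != expected_audience:
        return False   # token was minted for a different resource
    if payload.get("exp", 0) <= time.time():
        return False   # expired
    return True

token = {
    "iss": "https://auth.example.com",
    "aud": "https://mcp.example.com",
    "exp": time.time() + 300,
    "scope": "mcp:read:models",
}
print(validate_claims(token, "https://auth.example.com",
                      "https://mcp.example.com"))
```

Only after these checks pass should the scope matcher run and the tool actually execute.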

Don't build it all from scratch

The good news? You don't need to reinvent this wheel. At Scalekit, we are launching a drop-in OAuth authorization server that attaches to your MCP server without major rewrites or migrations.

Scalekit provides turnkey auth infrastructure for MCP servers. Implementation takes minutes, not weeks. (We actually built our own MCP server.)

Next steps

Enterprise teams are rolling out MCP servers into production pipelines, and the attack surface is expanding fast.

Stop shipping naked MCP servers. Check out Scalekit's MCP auth now.

FAQs

Why do remote MCP servers require dedicated OAuth protection?

Remote MCP servers are exposed across networks and act as gateways to sensitive business logic. Without a robust authentication layer like OAuth, these servers represent a significant security hole accessible to unauthorized actors. The updated Model Context Protocol mandates OAuth to ensure that only authenticated agents can interact with remote tools and data. Implementing this security layer prevents the common pitfall of shipping naked servers while providing the necessary identity context for every request. It transforms a vulnerable endpoint into a secure, enterprise-ready resource.

What are the benefits of using Dynamic Client Registration?

Dynamic Client Registration allows AI agents and third-party tools to register as OAuth clients automatically at runtime. This removes the need for manual configuration and hardcoded credentials, which are common points of failure in distributed systems. By leveraging the OAuth 2.0 Dynamic Client Registration Protocol, engineering teams can scale their MCP ecosystems efficiently while maintaining strict metadata validation. This approach ensures each agent has a unique identity, facilitating better compliance and granular control over which applications can request access to your specific MCP resources.

How do granular scopes improve MCP server security posture?

Granular scopes act as critical guardrails by enforcing the principle of least privilege for every AI agent. Instead of broad, all-encompassing access, scopes like mcp:exec:functions.weather let you define exactly which functions or data a token holder can access. During the OAuth consent process, users can review these specific permissions before granting access. This prevents an all-or-nothing scenario where a single compromised token exposes your entire backend. Well-defined scopes also simplify auditing, as logs clearly show which functional domains were accessed by specific agents.

Why should developers separate authorization logic from MCP servers?

Modern architecture dictates a clear separation of concerns between the resource server and the authorization server. The MCP server should focus exclusively on executing business logic and providing tool capabilities, while a dedicated authorization server handles identity verification and token issuance. This modularity prevents the complexity of embedding auth logic directly into your application code. By treating the authorization server as an independent gatekeeper, you can update security policies or rotate signing keys without redeploying your core MCP services, ensuring a more resilient and maintainable AI infrastructure.

What role does PKCE play in securing MCP clients?

Proof Key for Code Exchange is essential for securing public clients like browser-based agents or frontend tools. Since these clients cannot securely store a client secret, PKCE provides a dynamic mechanism to protect against authorization code interception. It replaces static secrets with a temporary verifier, ensuring that even if an attacker intercepts the code, they cannot exchange it for an access token. For architects building remote MCP workflows, implementing PKCE is a non-negotiable requirement to maintain integrity in environments where client-side security is difficult to guarantee.

How does token introspection facilitate secure tool execution?

Token introspection allows the MCP server to verify the validity of incoming access tokens in real time. By querying the authorization server, the resource server can check token expiration, issuer authenticity, and the specific scopes granted to the caller. This process ensures that every tool execution is backed by a valid, active credential. Without proper introspection or JWT validation, an MCP server might process requests from revoked or expired tokens, leading to unauthorized data access. It serves as the final checkpoint before an agent interacts with sensitive backend systems.

Why are short lived tokens recommended for AI agents?

Short-lived tokens, typically lasting between five and thirty minutes, significantly reduce the window of opportunity for attackers if a token is leaked. In the fast-moving landscape of AI agent workflows, long-lived tokens represent a major liability. By using short-lived access tokens combined with refresh tokens, systems can maintain secure connectivity without requiring constant user reauthorization. This practice ensures that even in the event of a credential compromise, the potential damage is limited and the system quickly returns to a secure state through token expiration.

What are common pitfalls when implementing MCP authorization?

A frequent mistake is skipping token validation at the resource server level and assuming incoming requests are inherently safe. Another common pitfall is using opaque or generic scopes like admin, which provide no meaningful context during security audits. Engineering teams also often struggle with embedding complex auth logic directly into their MCP code rather than using a centralized authorization server. Finally, failing to log and audit every token issuance and tool call leaves the system without a line of defense when security incidents occur, making effective forensic analysis impossible.

How does Scalekit simplify OAuth for MCP servers?

Scalekit provides a drop-in OAuth authorization server designed specifically for the Model Context Protocol. It allows developers to attach a robust identity layer to their MCP servers in minutes, avoiding the need for complex internal builds or migrations. By offering turnkey infrastructure that supports dynamic client registration, PKCE, and granular scope management, Scalekit helps engineering teams move from naked servers to enterprise-grade security quickly. This allows CISOs and CTOs to confidently deploy AI agents into production pipelines while ensuring full compliance with the latest security standards and protocols.
