
A summary of everything we launched this week to make your AI apps secure and efficient.
For the past year, teams have been moving fast to adopt the Model Context Protocol (MCP) as the standard way for AI agents to interact with real software. The idea is simple: instead of hitting raw APIs, agents call MCP-compliant tools described in natural language, fetching data, triggering workflows, and composing services with human-like reasoning.
But while adoption has accelerated, security hasn’t kept pace.
Today, most MCP servers are barely protected. They lack authentication, don’t enforce scopes, and offer no auditability. In many cases, they are just stateless HTTP endpoints left open to the world. What started as local scripts and CLI tools is now powering critical production workflows with the security posture of a hackathon prototype.
This week, we’re launching a new generation of MCP infrastructure to change that.
We’ve been quietly building and battle-testing an end-to-end MCP stack that is secure by default, easy to integrate, and ready for real workloads. Over the next few days, we’re rolling out three foundational upgrades to help teams ship secure, agent-ready MCP servers from day one.
Before launching anything publicly, we built a real MCP server for our own platform. It’s used internally, integrated with agents like Claude and ChatGPT, and exposes a full suite of tools for managing environments, users, organizations, and authentication connections.
What sets it apart is how it’s secured.
MCP clients and agents aren’t just querying LLMs anymore. They are making real API calls, triggering workflows, and acting on behalf of users. This shift fundamentally changes how authentication needs to work. It affects how tokens are issued, how scopes are granted, and how user context moves through an agentic workflow.
Every call in our server is authenticated using OAuth 2.1. Clients are dynamically registered. Tokens are short-lived and scope-restricted. PKCE flows, JWT validation, and audit logging are all built in. It runs on the same infrastructure we now offer to developers, and it’s open source.
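To make the JWT validation step concrete, here is a minimal, stdlib-only Python sketch of what a server-side token check involves: verifying the signature, the expiry, and the audience before any tool call runs. This is an illustration, not our actual implementation; it uses a shared HS256 secret so it can run self-contained, whereas OAuth 2.1 deployments typically validate tokens signed with an asymmetric key (e.g. RS256) fetched from the issuer’s JWKS endpoint.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(part: str) -> bytes:
    # Restore the padding stripped by the encoder before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def mint_jwt(claims: dict, secret: bytes) -> str:
    """Create a demo HS256 JWT (stand-in for the authorization server)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def validate_jwt(token: str, secret: bytes, audience: str) -> dict:
    """Check signature, expiry, and audience; return the claims on success."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    if claims.get("aud") != audience:
        raise ValueError("wrong audience")
    return claims
```

The audience check matters for MCP: it ensures a token issued for one server cannot be replayed against another, which is easy to overlook when many agents share one identity provider.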
You can fork it or study how it’s built.

The second piece is the OAuth foundation itself. In March 2025, the MCP spec officially mandated OAuth 2.1 for remote servers. That means exposing .well-known endpoints, issuing scoped access tokens, and handling the full lifecycle of authorization, including rotating secrets, introspecting tokens, and protecting public clients with PKCE.
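For reference, the discovery document a client fetches from the server’s .well-known path looks roughly like the sketch below (the field names follow RFC 8414 authorization server metadata; the URLs and scope names are placeholders, not our real endpoints).

```python
import json

# Hypothetical values: a real server publishes this JSON at
# /.well-known/oauth-authorization-server so clients can discover
# its endpoints and capabilities without hardcoding them.
metadata = {
    "issuer": "https://auth.example.com",
    "authorization_endpoint": "https://auth.example.com/oauth/authorize",
    "token_endpoint": "https://auth.example.com/oauth/token",
    "registration_endpoint": "https://auth.example.com/oauth/register",
    "scopes_supported": ["tools:calendar.read", "mcp:exec:functions.forecast"],
    "grant_types_supported": ["authorization_code", "refresh_token"],
    "code_challenge_methods_supported": ["S256"],
}

print(json.dumps(metadata, indent=2))
```

The `registration_endpoint` is what makes dynamic client registration possible: an agent can register itself, obtain a client ID, and begin an authorization flow without manual setup.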
We’re now shipping infrastructure designed specifically for this spec.
You can define granular scopes such as tools:calendar.read or mcp:exec:functions.forecast, issue refreshable tokens, and enforce access policies at the tool level. Whether you’re working with confidential agents or public integrations, the system gives you a clear and flexible way to manage permissions.
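Enforcing scopes at the tool level reduces to a lookup before dispatch. Here is a minimal sketch of the idea; the tool names and scope strings are illustrative, and a production policy layer would also handle scope hierarchies and auditing.

```python
# Map each MCP tool to the scope a token must carry to invoke it.
# Unknown tools are denied by default (fail closed).
REQUIRED_SCOPES = {
    "calendar.list_events": "tools:calendar.read",
    "functions.forecast": "mcp:exec:functions.forecast",
}

def authorize_tool_call(tool: str, granted_scopes: set) -> bool:
    """Return True only if the tool is known and its scope was granted."""
    required = REQUIRED_SCOPES.get(tool)
    return required is not None and required in granted_scopes

# A token granted only read access to calendar tools:
token_scopes = {"tools:calendar.read"}
print(authorize_tool_call("calendar.list_events", token_scopes))  # allowed
print(authorize_tool_call("functions.forecast", token_scopes))    # denied
```

Failing closed on unknown tools is the important design choice: adding a new tool forces you to decide its scope before any agent can call it.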
This moves authentication from a deferred chore into a core building block of the MCP stack.

The final piece of this launch is one that’s often overlooked: documentation.
Most developer docs today are written for humans, organized top to bottom with lots of prose and visual cues. But agents don’t read docs the way people do. They extract meaning, synthesize snippets, and build workflows from fragments.
So we reimagined how documentation should work when your developer interface is a language model.
All of our guides now include a “Copy for LLM context” button that generates structured Markdown, designed for tools like Claude Code, Cursor, and Windsurf. We’re also testing pipelines that generate llmstxt.org-compliant docs from our codebase, making it easy for agents to ingest, parse, and reason about how your tools work.
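To give a sense of the output format, here is a small sketch that renders an llms.txt-style Markdown index (an H1 title, a blockquote summary, and H2 sections of annotated links, per the llmstxt.org convention). The project name and URLs are hypothetical.

```python
def build_llms_txt(title: str, summary: str, sections: dict) -> str:
    """Render an llms.txt-style Markdown index for LLM consumption."""
    lines = [f"# {title}", "", f"> {summary}", ""]
    for section, links in sections.items():
        lines.append(f"## {section}")
        for name, url, note in links:
            lines.append(f"- [{name}]({url}): {note}")
        lines.append("")
    return "\n".join(lines)

doc = build_llms_txt(
    "Example MCP Server",  # hypothetical project
    "Tools for managing environments, users, and auth connections.",
    {
        "Docs": [
            ("Authentication", "https://example.com/docs/auth",
             "OAuth 2.1 setup and scope reference"),
        ],
    },
)
print(doc)
```

The point of the format is density: a flat, link-annotated index is far easier for an agent to ingest than prose-heavy pages built around visual navigation.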
This is not just a UX improvement. It’s a fundamental shift in how developer experience is delivered. If you want to support AI-assisted workflows, your docs need to serve machines as well as people.
The past six months have shown us that agents are no longer just querying LLMs. They are creating pull requests, provisioning infrastructure, pulling CRM data, and chaining together tools across the stack. This is not just automation. It is execution at machine speed, and it requires a security model that can keep up.
Too often, teams skip authentication early in the build process, assuming it’s just for internal use. But over time, those prototypes become core infrastructure. Tokens get reused, secrets are hardcoded, scopes become inconsistent, and access becomes difficult to track.
Our goal this week is to help teams avoid that path entirely.
With a secure MCP server implementation, spec-compliant OAuth 2.1 support, and agent-ready documentation, we’re giving teams the tools to build secure, production-grade interfaces from the very first commit.
Authentication should not be duct-taped together. It should be the part that just works.
Stay tuned. We’ll be sharing more in the coming days. And if you're already building with MCP, now is the time to make sure your server is ready for agents, security, and scale.
The Model Context Protocol specification officially mandated OAuth 2.1 for remote servers starting in March 2025. This change ensures that AI agents and MCP clients interact with software through a standardized and secure framework. By requiring OAuth 2.1, the spec enforces the use of .well-known endpoints, PKCE for public clients, and granular scoping. Scalekit provides the necessary infrastructure to implement these requirements out of the box, allowing developers to focus on tool logic rather than complex security protocols while maintaining compliance with evolving industry standards for agentic workflows.
Scalekit secures agentic workflows by providing a production-ready stack that is built auth-first. Unlike unprotected HTTP endpoints, our infrastructure uses OAuth 2.1 to issue short-lived, scope-restricted tokens. This ensures that every action taken by an AI agent is authenticated and authorized. We also include built-in PKCE flows, JWT validation, and comprehensive audit logging. This approach transforms authentication from an afterthought into a foundational building block, enabling agents to perform critical tasks like infrastructure provisioning or CRM data retrieval with a robust and scalable security posture.
Granular scopes allow developers to define precise permissions for AI agents, such as restricting access to specific calendar tools or forecasting functions. Instead of granting broad API access, you can issue tokens limited to exactly what the agent needs for its current task. This minimizes the blast radius in case of a credential compromise and ensures that agents operate within strict guardrails. Scalekit's infrastructure makes it easy to define and enforce these tool-level policies, giving engineering teams and CISOs the confidence to deploy autonomous agents in production environments without sacrificing security or control.
Agent-ready documentation requires a shift from human-centric prose to structured formats that language models can easily parse. Scalekit addresses this by including a “Copy for LLM context” button in our guides, which generates structured Markdown for tools like Claude Code and Cursor. We also utilize pipelines to create llmstxt.org-compliant documentation directly from codebases. This allows AI agents to ingest, synthesize, and reason about tool functionality more effectively. By serving both machines and humans, you improve the developer experience and increase the reliability of agentic integrations within your software ecosystem.
Teams often skip authentication in the early stages of building MCP servers, assuming the tools are only for internal use or prototypes. However, these prototypes frequently evolve into core production infrastructure over time. Without a robust security model from the start, you risk hardcoding secrets, reusing tokens, and maintaining inconsistent scopes. This creates significant technical debt and security vulnerabilities. Scalekit provides the tools to build secure, production-grade interfaces from the very first commit, ensuring that your AI apps are ready for scale and enterprise requirements without needing a painful security overhaul later.
Proof Key for Code Exchange (PKCE) is a critical component of the OAuth 2.1 specification, particularly for protecting public clients and agents. In the context of MCP, PKCE prevents authorization code injection attacks and ensures that the client requesting the token is the same one that initiated the flow. Scalekit integrates PKCE into its MCP server infrastructure to provide a higher level of security for remote interactions. This is essential for AI agents acting on behalf of users in environments where client secrets cannot be securely stored, making the authentication process resilient against common web-based threats.
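The PKCE mechanism itself is small. Per RFC 7636 with the S256 method, the client generates a random verifier, sends its SHA-256 hash as the challenge when starting the flow, and later proves possession by presenting the original verifier at the token endpoint. A minimal sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Client side: generate a random verifier and its S256 challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_verifies(verifier: str, challenge: str) -> bool:
    """Token endpoint: recompute the challenge from the presented verifier."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

verifier, challenge = make_pkce_pair()
print(server_verifies(verifier, challenge))  # a matching pair verifies
```

Because only the hash travels in the initial request, an attacker who intercepts the authorization code cannot redeem it without the verifier, which never left the client.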
Scalekit supports machine-to-machine (M2M) and agent-to-agent authentication by implementing standardized OAuth 2.1 flows tailored for the Model Context Protocol. Our infrastructure allows for dynamic client registration and the issuance of scoped access tokens specifically designed for non-human actors. This enables seamless and secure communication between different software services and AI agents. By providing centralized identity management and clear permission sets, we ensure that automated workflows are both auditable and secure. This architecture is vital for modern B2B applications where agents must autonomously chain together multiple tools across a complex tech stack.
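In OAuth 2.1 terms, M2M access typically uses the client credentials grant: the service authenticates with its own credentials and requests only the scopes it needs. The sketch below builds the form body such a client would POST to the token endpoint; the client ID, secret, and scope are placeholders, not real credentials.

```python
from urllib.parse import urlencode

def client_credentials_request(client_id: str, client_secret: str,
                               scopes: list) -> str:
    """Build the form-encoded body for an OAuth 2.1 client_credentials grant.

    A real M2M client would POST this to the token_endpoint advertised in
    the server's .well-known metadata and receive a scoped access token.
    """
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),  # space-delimited per the OAuth spec
    })

body = client_credentials_request(
    "agent-svc",            # hypothetical client ID
    "replace-me",           # hypothetical secret; load from a vault in practice
    ["mcp:exec:functions.forecast"],
)
print(body)
```

Keeping the requested scope list narrow is the same least-privilege discipline as above: a forecasting agent's token should not be able to touch calendar tools.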
Yes, Scalekit has open-sourced a production-ready MCP server that you can fork or study. It serves as a reference implementation for building auth-first AI applications. This server is integrated with agents like Claude and ChatGPT and handles real API calls for managing users and authentication connections. It demonstrates how to implement OAuth 2.1, short-lived tokens, and audit logging in a real-world scenario. By providing this open-source resource, we aim to help the developer community adopt best practices for secure agentic workflows and accelerate the deployment of compliant MCP infrastructure.
The Model Context Protocol provides a standardized way for AI agents to interact with software, moving beyond raw API calls to tool-based reasoning. It allows agents to fetch data and trigger workflows using natural language descriptions. The significance lies in creating a unified interface that works across various LLMs and applications. By mandating security standards like OAuth 2.1, the MCP spec ensures this ecosystem remains secure as it scales. Scalekit follows this specification closely, providing the necessary tools to build compliant servers that can safely handle the execution speed and complexity of modern AI agents.