MCP launch week summary

A summary of everything we launched this week to make your AI apps secure and efficient.

For the past year, teams have been moving fast to adopt the Model Context Protocol (MCP) as the standard way for AI agents to interact with real software. The idea is simple: instead of hitting raw APIs, agents call MCP-compliant tools described in natural language, fetching data, triggering workflows, and composing services with human-like reasoning.

But while adoption has accelerated, security hasn’t kept pace.

Today, most MCP servers are barely protected. They lack authentication, don’t enforce scopes, and offer no auditability. In many cases, they are just stateless HTTP endpoints left open to the world. What started as local scripts and CLI tools is now powering critical production workflows with the security posture of a hackathon prototype.

This week, we’re launching a new generation of MCP infrastructure to change that.

We’ve been quietly building and battle-testing an end-to-end MCP stack that is secure by default, easy to integrate, and ready for real workloads. Over the next few days, we’re rolling out three foundational upgrades to help teams ship secure, agent-ready MCP servers from day one.

1. A production-ready MCP server, built auth-first

Before launching anything publicly, we built a real MCP server for our own platform. It’s used internally, integrated with agents like Claude and ChatGPT, and exposes a full suite of tools for managing environments, users, organizations, and authentication connections.

What sets it apart is how it’s secured.

MCP clients and agents aren’t just querying LLMs anymore. They are making real API calls, triggering workflows, and acting on behalf of users. This shift fundamentally changes how authentication needs to work. It affects how tokens are issued, how scopes are granted, and how user context moves through an agentic workflow.

Every call in our server is authenticated using OAuth 2.1. Clients are dynamically registered. Tokens are short-lived and scope-restricted. PKCE flows, JWT validation, and audit logging are all built in. It runs on the same infrastructure we now offer to developers, and it’s open source.
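To make “authenticated by default” concrete, here is a minimal sketch of per-request token validation in front of an MCP endpoint, using the jose library for JWT verification. The issuer, audience, endpoint, and scope values are illustrative placeholders, not Scalekit’s actual configuration.

```typescript
// Sketch: validate a Bearer access token before an MCP tool call is dispatched.
// Issuer, audience, and JWKS URL below are hypothetical placeholders.
import { createRemoteJWKSet, jwtVerify } from "jose";
import type { Request, Response, NextFunction } from "express";

const JWKS = createRemoteJWKSet(
  new URL("https://auth.example.com/.well-known/jwks.json") // hypothetical issuer keys
);

export async function requireScopedToken(
  req: Request,
  res: Response,
  next: NextFunction
) {
  const token = req.headers.authorization?.replace(/^Bearer /, "");
  if (!token) return res.status(401).json({ error: "missing_token" });

  try {
    // Verify signature, expiry, issuer, and audience of the short-lived token.
    const { payload } = await jwtVerify(token, JWKS, {
      issuer: "https://auth.example.com",
      audience: "https://mcp.example.com",
    });

    // Attach the granted scopes and subject so downstream tool handlers
    // can enforce per-tool policies and write audit log entries.
    (req as any).scopes = String(payload.scope ?? "").split(" ");
    (req as any).subject = payload.sub;
    next();
  } catch {
    return res.status(401).json({ error: "invalid_token" });
  }
}
```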

You can fork it or study how it’s built.

Check out Scalekit’s MCP Server

2. Spec-compliant OAuth 2.1 for MCP servers

The second piece is the OAuth foundation itself. In March 2025, the MCP spec officially mandated OAuth 2.1 for remote servers. That means exposing .well-known endpoints, issuing scoped access tokens, and handling the full lifecycle of authorization, including rotating secrets, introspecting tokens, and protecting public clients with PKCE.
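As a rough illustration of the discovery piece, here is a sketch of one such .well-known document: OAuth protected resource metadata (RFC 9728), which tells clients which authorization server issues tokens for the MCP server and which scopes it understands. Every URL and scope value below is a placeholder, not our production metadata.

```typescript
// Sketch: serve discovery metadata from a .well-known path (RFC 9728 shape).
// All URLs and scopes are illustrative placeholders.
import express from "express";

const app = express();

app.get("/.well-known/oauth-protected-resource", (_req, res) => {
  res.json({
    resource: "https://mcp.example.com",                 // this MCP server
    authorization_servers: ["https://auth.example.com"], // who issues its tokens
    scopes_supported: [
      "tools:calendar.read",
      "mcp:exec:functions.forecast",
    ],
    bearer_methods_supported: ["header"],                // tokens arrive as Bearer headers
  });
});

app.listen(3000);
```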

We’re now shipping infrastructure designed specifically for this spec.

You can define granular scopes such as tools:calendar.read or mcp:exec:functions.forecast, issue refreshable tokens, and enforce access policies at the tool level. Whether you’re working with confidential agents or public integrations, the system gives you a clear and flexible way to manage permissions.
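In practice, tool-level enforcement can be as small as a map from tool name to required scope, consulted before a call is dispatched. The sketch below is hypothetical: the tool names and the authorizeToolCall helper are made up for illustration; only the scope strings come from the examples above.

```typescript
// Hypothetical sketch: require a specific scope per tool before dispatch.
const toolPolicies: Record<string, string> = {
  "calendar.list_events": "tools:calendar.read",
  "forecast.run": "mcp:exec:functions.forecast",
};

export function authorizeToolCall(tool: string, grantedScopes: string[]): void {
  const required = toolPolicies[tool];
  if (!required) {
    throw new Error(`No access policy registered for tool "${tool}"`);
  }
  if (!grantedScopes.includes(required)) {
    throw new Error(`Token is missing scope "${required}" needed for "${tool}"`);
  }
}

// Example: scopes would normally come from the token's "scope" claim.
authorizeToolCall("calendar.list_events", ["tools:calendar.read"]);
```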

This moves authentication from a deferred chore into a core building block of the MCP stack.

Get started securing your MCP server with drop-in OAuth

3. Developer docs that work for agents, too

The final piece of this launch is one that’s often overlooked: documentation.

Most developer docs today are written for humans, organized top to bottom with lots of prose and visual cues. But agents don’t read docs the way people do. They extract meaning, synthesize snippets, and build workflows from fragments.

So we reimagined how documentation should work when your developer interface is a language model.

All of our guides now include a “Copy for LLM context” button that generates structured Markdown, designed for tools like Claude Code, Cursor, and Windsurf. We’re also testing pipelines that generate llms.txt-compliant docs (per llmstxt.org) from our codebase, making it easy for agents to ingest, parse, and reason about how your tools work.
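For context, llms.txt (defined at llmstxt.org) is just a small Markdown index: a title, a one-line summary, and sectioned link lists an agent can crawl. The sketch below is a hypothetical example with placeholder links, not our actual docs index.

```markdown
# Scalekit Docs
> Authentication infrastructure for MCP servers: OAuth 2.1, scoped tokens, and agent-ready APIs.

## Guides
- [Secure an MCP server](https://example.com/docs/mcp-oauth.md): add OAuth 2.1 with dynamic client registration and PKCE
- [Define tool scopes](https://example.com/docs/scopes.md): map tools to granular permissions

## Optional
- [Full API reference](https://example.com/docs/api.md)
```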

This is not just a UX improvement. It’s a fundamental shift in how developer experience is delivered. If you want to support AI-assisted workflows, your docs need to serve machines as well as people.

Explore Scalekit Docs

What This Unlocks

The past six months have shown us that agents are no longer just querying LLMs. They are creating pull requests, provisioning infrastructure, pulling CRM data, and chaining together tools across the stack. This is not just automation. It is execution at machine speed, and it requires a security model that can keep up.

Too often, teams skip authentication early in the build process, assuming it’s just for internal use. But over time, those prototypes become core infrastructure. Tokens get reused, secrets are hardcoded, scopes become inconsistent, and access becomes difficult to track.

Our goal this week is to help teams avoid that path entirely.

With a secure MCP server implementation, spec-compliant OAuth 2.1 support, and agent-ready documentation, we’re giving teams the tools to build secure, production-grade interfaces from the very first commit.

Authentication should not be duct-taped together. It should be the part that just works.

Stay tuned. We’ll be sharing more in the coming days. And if you're already building with MCP, now is the time to make sure your server is ready for agents, security, and scale.
