API Authentication
Jul 21, 2025

OAuth 2.0 best practices for secure APIs: RFC 9700

Srinivas Karre
Founding Engineer

Could your OAuth setup quietly be leaking customer data?

OAuth 2.0 is foundational for API security, but subtle implementation mistakes can expose sensitive customer data. RFC 9700 (the Best Current Practice for OAuth 2.0 Security) provides concrete, real-world advice for hardening OAuth implementations and avoiding common vulnerabilities.

This guide translates RFC 9700 into actionable practices for developers building secure, high-performance APIs.

Why RFC 9700 matters for your APIs

OAuth 2.0 offers flexibility, but this comes with risks. Older or insecure flows, such as implicit grants and resource owner password credentials (ROPC), are vulnerable to attacks like token interception or replay. RFC 9700 deprecates insecure methods and strengthens OAuth flows with mandatory security measures like PKCE (Proof Key for Code Exchange).

What exactly are implicit grants and ROPC?

Implicit grants: An OAuth 2.0 flow where access tokens are returned directly in the URL fragment. Originally designed for browser-based JavaScript apps, it lets tokens leak through browser history, referrer headers, or logs. Here’s an example of an implicit grant:

GET https://auth.example.com/authorize?
  response_type=token&
  client_id=frontend-app&
  redirect_uri=https://app.example.com/callback

Resource Owner Password Credentials (ROPC): A flow where users hand their username and password directly to the client app, which exchanges those credentials for an access token. This exposes user credentials to the client app itself, increasing risk. Here’s an example of the ROPC flow:

POST https://auth.example.com/token
Content-Type: application/x-www-form-urlencoded

grant_type=password
&username=user@example.com
&password=userpassword
&client_id=client-app

RFC 9700 explicitly recommends dropping these insecure flows.

The recommended secure flow: Authorization code with PKCE

The authorization code flow is OAuth’s most secure option. Instead of returning tokens directly, the authorization server first issues an authorization code that the app exchanges for tokens. PKCE strengthens this flow further.

What is PKCE (Proof Key for Code Exchange)?

PKCE is a security mechanism that prevents interception attacks by associating each authorization request with a unique secret (called a code_verifier). The server receives only a hashed version (the code_challenge) during authorization, ensuring tokens are delivered only to the client that started the flow. RFC 9700 makes PKCE mandatory, even for server-side apps.

Example authorization request (with PKCE):

// Generate PKCE verifier and challenge
const crypto = require('crypto');

function generateCodeVerifier() {
  return crypto.randomBytes(32).toString('base64url');
}

function generateCodeChallenge(verifier) {
  return crypto.createHash('sha256').update(verifier).digest('base64url');
}

const codeVerifier = generateCodeVerifier();
const codeChallenge = generateCodeChallenge(codeVerifier);

// Authorization request example
const authUrl = new URL('https://auth.ecommerce.com/authorize');
authUrl.searchParams.set('response_type', 'code');
authUrl.searchParams.set('client_id', 'shop-app');
authUrl.searchParams.set('redirect_uri', 'shopapp://callback');
authUrl.searchParams.set('scope', 'orders.read profile');
authUrl.searchParams.set('code_challenge', codeChallenge);
authUrl.searchParams.set('code_challenge_method', 'S256');

console.log(authUrl.toString());

code_verifier: Secret randomly generated by client (never shared openly).

code_challenge: SHA-256 hashed version of the code_verifier sent to the server.

code_challenge_method: Hashing algorithm, always use S256.

Exchanging authorization code for tokens (with PKCE)

After the user authenticates, the server responds with an authorization code. The app exchanges this code, including the original code_verifier, for an access token.

// Example token exchange request
const axios = require('axios');

async function exchangeCodeForToken(authCode, codeVerifier) {
  // Send the body as URL-encoded form data; passing a plain object would
  // make axios serialize it as JSON, which token endpoints reject
  const params = new URLSearchParams({
    grant_type: 'authorization_code',
    client_id: 'shop-app',
    redirect_uri: 'shopapp://callback',
    code: authCode,
    code_verifier: codeVerifier
  });

  const response = await axios.post('https://auth.ecommerce.com/token', params, {
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
  });

  return response.data;
}

Key RFC 9700 recommendations summarized

  • Always use Authorization Code flow with PKCE. This prevents token interception.
  • Stop using implicit grants and ROPC. Avoids token exposure and direct credential handling.
  • Validate redirect URIs strictly. Always require exact matches to prevent redirects to attacker-controlled URLs.
  • Use minimal scopes and short-lived tokens. Reduces impact if tokens are compromised.
  • Securely handle refresh tokens. Rotate frequently and store securely (HTTP-only cookies, secure storage).
  • Confidential clients must use strong auth. Prefer JWT client assertions or mutual TLS (mTLS) instead of client secrets in public environments.
  • Detect token misuse. Monitor token replay and enforce uniqueness (jti).

Common developer mistakes (and how to avoid them)

Mistake: Not validating redirect URIs exactly.

Correct approach: Perform strict string equality checks.

// Simple redirect URI validation example
function isValidRedirectUri(requestedUri, registeredUri) {
  return requestedUri === registeredUri;
}

Mistake: Using PKCE incorrectly or insecurely (e.g., weak random strings).

Correct approach: Always generate cryptographically strong verifiers (use Node’s crypto library).

Here's a practical, secure JavaScript snippet (using Node.js's built-in crypto library) for generating cryptographically strong PKCE verifiers.

const crypto = require('crypto');

// Generate a cryptographically strong code verifier
function generateCodeVerifier() {
  return crypto.randomBytes(32).toString('base64url');
}

// Generate a corresponding code challenge (S256)
function generateCodeChallenge(codeVerifier) {
  return crypto.createHash('sha256')
    .update(codeVerifier)
    .digest('base64url');
}

// Example usage:
const codeVerifier = generateCodeVerifier();
const codeChallenge = generateCodeChallenge(codeVerifier);

console.log('Code Verifier:', codeVerifier);
console.log('Code Challenge:', codeChallenge);

generateCodeVerifier(): Creates a random, high-entropy string using crypto.randomBytes(). This is ideal for secure OAuth PKCE implementations.

generateCodeChallenge(): Hashes the verifier with SHA-256 as required for the PKCE flow. The challenge is safe to transmit publicly in authorization requests.

Edge cases and when not to strictly apply RFC 9700

  • Low-risk, internal-only systems might not require all RFC 9700 measures.
  • Extremely resource-constrained environments might find cryptographic requirements challenging.

But for customer-facing, sensitive APIs, RFC 9700 practices should be followed rigorously.

Developer checklist for RFC 9700 compliance

  • Use Authorization Code flow with PKCE (S256)
  • No implicit or password grants
  • Strict redirect URI validation
  • Minimal, explicit token scopes
  • Refresh tokens rotated securely
  • Confidential clients use strong authentication
  • Access tokens are short-lived
  • Token misuse detection (replay, injection, mix-up)

Conclusion: Strengthening OAuth security

RFC 9700 provides essential, actionable guidance that significantly enhances OAuth 2.0 security. By clearly outlining secure flows (Authorization Code with PKCE), explicitly discouraging risky methods (implicit and password grants), and highlighting critical implementation details like strict redirect validation and secure token handling, RFC 9700 helps developers build APIs resistant to real-world OAuth vulnerabilities.

Adopting RFC 9700 ensures:

  • Reduced risk of token leaks and interception.
  • Prevention of common attacks like token replay, injection, and mix-up.
  • Clear, standardized OAuth implementations that are easier to maintain.

For developers serious about protecting customer data and maintaining trust, RFC 9700 is a practical blueprint for secure OAuth.

FAQs

Why is the implicit grant flow now considered insecure?

The implicit grant flow exposes access tokens directly in the URL fragment. This exposure makes tokens vulnerable to leakage through browser history, referrer headers, or system logs. RFC 9700 recommends replacing this legacy method with the authorization code flow combined with PKCE. By moving token delivery to a secure backchannel exchange, developers eliminate the primary attack surface found in browser-based applications. This shift is critical for modern B2B applications where protecting customer data and session integrity remains the top priority for engineering teams and CISOs who want to avoid accidental data exposure.

How does PKCE improve security for server-side applications?

PKCE introduces a dynamic secret called a code verifier to the authorization process. The client sends a hashed version of this secret initially and provides the raw secret only during the token exchange phase. This mechanism ensures that even if an attacker intercepts the authorization code, they cannot exchange it for a token without the original verifier. RFC 9700 now mandates PKCE for all client types, including server-side apps. This prevents interception attacks and provides a cryptographically strong binding between the initial request and the final token issuance, significantly hardening the entire OAuth lifecycle for high-performance APIs.

Why should organizations deprecate the Resource Owner Password flow?

The Resource Owner Password Credentials flow requires users to share their primary credentials directly with the client application. This practice increases the risk of credential theft and bypasses the security benefits of centralized identity providers. RFC 9700 explicitly deprecates this flow because it creates a massive trust burden on the client app and complicates the implementation of multi-factor authentication. Modern B2B architectures should instead use redirected flows or federated identity to ensure that sensitive user passwords never touch the application code, thereby reducing the overall blast radius of potential security breaches in enterprise environments.

What are the security risks of lax redirect URI validation?

Lax redirect URI validation allows attackers to intercept authorization codes by redirecting users to malicious endpoints. If an application accepts wildcard matches or subdomains, an attacker could craft a request that sends sensitive codes to a server they control. RFC 9700 insists on strict string equality checks for all redirect URIs. This simple but effective practice prevents open redirector vulnerabilities and ensures that tokens are only delivered to pre-approved, trusted environments. Consistent validation is a cornerstone of secure OAuth implementations, protecting both service providers and their end users from sophisticated phishing and interception attempts.
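The exact-match rule extends naturally to clients that register more than one redirect URI: compare the requested value against each registered entry with strict string equality, never prefix or wildcard matching. A minimal sketch (the client IDs and URIs are hypothetical):

```javascript
// Registered redirect URIs per client (hypothetical registry)
const registeredRedirectUris = new Map([
  ['shop-app', ['shopapp://callback', 'https://app.example.com/callback']]
]);

// Accept a redirect_uri only if it exactly matches a registered value
function isAllowedRedirectUri(clientId, requestedUri) {
  const uris = registeredRedirectUris.get(clientId) || [];
  return uris.some((registered) => registered === requestedUri);
}
```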

How should developers implement secure machine-to-machine authentication?

For machine-to-machine or agent-to-agent communication, RFC 9700 and modern standards suggest moving away from simple client secrets. Instead, developers should utilize strong authentication methods like JWT client assertions or mutual TLS. These methods provide higher assurance and are less susceptible to accidental exposure in configuration files or logs. In the context of AI agents and MCP servers, utilizing secure M2M flows ensures that autonomous systems can interact with APIs without human intervention while maintaining a high security posture. Implementing these robust authentication patterns is vital for scaling enterprise-grade B2B integrations safely and efficiently.

Why is token rotation essential for maintaining secure user sessions?

Refresh token rotation is a defensive strategy where a new refresh token is issued every time the current one is used. This process ensures that if a token is stolen, the original holder and the attacker will eventually conflict, allowing the authorization server to detect the anomaly and revoke all associated sessions. RFC 9700 emphasizes the importance of this technique alongside short-lived access tokens. By limiting the lifespan of credentials and rotating them frequently, organizations can minimize the window of opportunity for attackers and improve their overall ability to detect and respond to unauthorized access within their systems.

How do AI agents utilize OAuth for secure tool access?

AI agents often require delegated access to external tools and data via MCP servers. Using OAuth with PKCE allows these agents to obtain scoped permissions without ever handling user credentials directly. This architectural approach follows the principle of least privilege by using minimal scopes tailored to the specific task the agent needs to perform. As AI agents become more autonomous in B2B environments, implementing RFC 9700 compliant flows ensures that their access is auditable, revocable, and secure. This foundation allows organizations to leverage AI capabilities while strictly adhering to enterprise security policies and complex compliance requirements.

What role does RFC 9700 play in modern B2B authorization?

RFC 9700 serves as a comprehensive blueprint for modernizing B2B authorization by consolidating years of security lessons into a single standard. It shifts the industry toward the authorization code flow with PKCE as the universal best practice. For engineering managers and CTOs, adopting this standard means reduced technical debt and a more resilient security architecture. By following these guidelines, B2B platforms can offer their customers a secure and standardized way to integrate services, ensuring that data exchange is protected against common vulnerabilities like token injection, mix-up attacks, and unauthorized credential exposure across complex distributed systems.

How can organizations detect and prevent unauthorized token replay attacks?

To prevent token replay attacks, RFC 9700 recommends several strategies including the use of unique token identifiers and short-lived credentials. Authorization servers should enforce that each authorization code is used exactly once. If a code is presented a second time, the server must revoke all tokens issued from that original request. Additionally, implementing sender-constrained tokens using mTLS or DPoP provides an extra layer of protection by binding the token to the specific client that requested it. These measures collectively ensure that intercepted tokens cannot be reused by malicious actors, maintaining the integrity of the authentication system.
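The single-use rule for authorization codes can be sketched as follows; the in-memory records and the mintToken callback are hypothetical stand-ins for a real server's storage and token issuance:

```javascript
const issuedCodes = new Map(); // code -> { redeemed, tokens }

function issueAuthorizationCode(code) {
  issuedCodes.set(code, { redeemed: false, tokens: [] });
}

function redeemAuthorizationCode(code, mintToken) {
  const record = issuedCodes.get(code);
  if (!record) {
    return { ok: false, reason: 'invalid_code' };
  }
  if (record.redeemed) {
    // Second presentation of the same code: revoke everything that was
    // minted from its first use, per the replay guidance above.
    record.tokens.forEach((t) => { t.revoked = true; });
    return { ok: false, reason: 'code_replayed_tokens_revoked' };
  }
  record.redeemed = true;
  const token = mintToken();
  record.tokens.push(token);
  return { ok: true, token };
}
```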
