
Arcade made an early bet that the rest of the category is still catching up to: that agent tool calling is fundamentally an authorization problem, and that solving it properly through OAuth delegation, rather than service accounts and bot tokens, is what separates production agents from demos.
That bet was right. The question worth asking now is whether Arcade has built enough around that conviction to serve your use case, or whether a different tool fits better.
When developers first wire an LLM to external tools, credentials are usually an afterthought: environment variables, a shared API key, a bot token with admin access. This works until it doesn't.
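The naive wiring looks something like the sketch below: one shared, admin-scoped token read from an environment variable and reused for every user's actions. All names here are illustrative, not any specific vendor's API.

```python
import os

# Anti-pattern sketch: a single long-lived credential for every user and agent.
os.environ.setdefault("BOT_TOKEN", "xoxb-shared-admin-token")

SHARED_TOKEN = os.environ["BOT_TOKEN"]  # admin-scoped, hand-rotated (if ever)

def build_request(action: str, user_id: str) -> dict:
    """Every call goes out as the bot, regardless of which user asked."""
    return {
        "action": action,
        # user_id is known here but is NOT reflected in the credential,
        # so the upstream audit log attributes everything to the bot.
        "headers": {"Authorization": f"Bearer {SHARED_TOKEN}"},
    }

# Two different users produce byte-identical authorization headers.
a = build_request("send_message", user_id="alice")
b = build_request("send_message", user_id="bob")
assert a["headers"] == b["headers"]
```

The shared header is exactly why attribution, scoping, and revocation all break at once: there is nothing per-user to scope, rotate, or audit.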
The problems that arrive in production are predictable: agent actions can't be attributed to the individual user who triggered them; a single leaked key grants broad, standing access; tokens expire with no refresh path; and the audit trail can't survive an enterprise security review.
This is the problem space Arcade is explicitly trying to own, and it's worth understanding before comparing alternatives.

Arcade was founded by executives from Okta - people whose entire professional context was enterprise authorization done correctly. That lineage is visible throughout the product.

Scalekit and Arcade share the same foundational conviction: auth and authz are the right starting point for agent connectivity, not the last thing you add. The difference is where that conviction gets applied and at what scale.
Arcade solves the per-user delegation problem elegantly. Scalekit solves that and the org-level authorization problem that emerges when you're running agents across thousands of enterprise customers, each with different permission requirements and compliance expectations.
Where Scalekit extends the Arcade model:
The authz layer goes deeper. Per-connector scope configuration means you define, per integration, what each org's agents are permitted to do, and that enforcement happens at the infrastructure layer, before the API is touched, regardless of what the agent requests. An agent attempting an operation outside its configured scope doesn't get an API error. It never reaches the API.
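The enforcement order described above can be sketched as follows. The `OrgPolicy` shape, scope names, and function signatures are assumptions for illustration, not Scalekit's actual configuration schema.

```python
from dataclasses import dataclass, field

@dataclass
class OrgPolicy:
    # connector -> set of operations this org's agents may perform
    allowed: dict = field(default_factory=dict)

class ScopeViolation(Exception):
    pass

def execute(policy: OrgPolicy, connector: str, operation: str, call_api):
    """Reject out-of-scope operations before touching the upstream API."""
    if operation not in policy.allowed.get(connector, set()):
        # The agent never reaches the API; there is no upstream error to leak.
        raise ScopeViolation(f"{connector}.{operation} not permitted for this org")
    return call_api(connector, operation)

policy = OrgPolicy(allowed={"salesforce": {"read_opportunity"}})
calls = []

def fake_api(connector, operation):
    calls.append((connector, operation))
    return "ok"

assert execute(policy, "salesforce", "read_opportunity", fake_api) == "ok"
try:
    execute(policy, "salesforce", "delete_opportunity", fake_api)
except ScopeViolation:
    pass
assert calls == [("salesforce", "read_opportunity")]  # the blocked call never hit the API
```

The point of checking before dispatch is that a denied operation produces no upstream traffic at all, so there is no API error message for the agent to reason about or retry around.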
Credential isolation is per-tenant by design. Each org's tokens live in an isolated vault. Cross-tenant credential access isn't a misconfiguration risk - it's not architecturally possible.
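One way to picture "not architecturally possible": each org gets its own vault object, and the lookup path requires the org's own handle, so there is no cross-tenant query to misconfigure. This is an illustrative sketch, not a real vault API.

```python
class TenantVault:
    """Per-org token store; keys are (user, connector) pairs within one org."""

    def __init__(self, org_id: str):
        self._org_id = org_id
        self._tokens: dict = {}  # (user, connector) -> token

    def put(self, user: str, connector: str, token: str) -> None:
        self._tokens[(user, connector)] = token

    def get(self, user: str, connector: str) -> str:
        return self._tokens[(user, connector)]

vaults = {"acme": TenantVault("acme"), "globex": TenantVault("globex")}
vaults["acme"].put("alice", "hubspot", "tok-acme-alice")

# Globex's vault simply has no entry for Acme's users; the failure mode is
# a missing key, not a leaked credential.
try:
    vaults["globex"].get("alice", "hubspot")
except KeyError:
    pass
```

Contrast this with a single flat table keyed by user alone, where a missing org filter in one query silently returns another tenant's token.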
Developer experience:
Connector catalog: ~300 and growing, built depth-first. Each connector covers the operations agents actually need to complete tasks, not just the most accessible API surface.
The Von case:
Von's agents act inside Salesforce, Gong, HubSpot, and Google Drive on behalf of individual sales team members. Every tool call needed a valid, scoped token for that specific user, kept current without human intervention, with no standing agent credentials, and an audit trail that could survive enterprise procurement review.
"Von touches identity in four places: user auth, embedded SSO, token store for integrations, and an AI tool calling proxy. Having all of that managed by Scalekit behind the scenes is what let us ship fast without stitching together parallel systems."
~ Venu Madhav Kattagoni, Head of Engineering, Von
Each new connector Von added inherited the same auth and authz pattern. The identity layer didn't change. The team spent their time on revenue intelligence, not credential plumbing.
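The per-call pattern the Von case describes - a scoped token for the specific user, refreshed without human intervention, with an audit record for each call - can be sketched roughly as below. Function names and shapes are hypothetical, not Scalekit's SDK.

```python
import time

class TokenStore:
    def __init__(self):
        self._tokens = {}  # (user, connector) -> (token, expires_at)

    def get_fresh(self, user, connector, refresh):
        token, expires_at = self._tokens.get((user, connector), (None, 0.0))
        if time.time() >= expires_at:  # expired or never issued
            token, expires_at = refresh(user, connector)
            self._tokens[(user, connector)] = (token, expires_at)
        return token

audit_log = []

def tool_call(store, user, connector, operation, refresh, call_api):
    # Token is minted/refreshed per call: no standing agent credential.
    token = store.get_fresh(user, connector, refresh)
    audit_log.append({"user": user, "connector": connector, "op": operation})
    return call_api(token, operation)

store = TokenStore()
refresh = lambda u, c: (f"tok-{u}-{c}", time.time() + 3600)
result = tool_call(store, "alice", "gong", "list_calls", refresh,
                   lambda token, op: (token, op))
assert result == ("tok-alice-gong", "list_calls")
assert audit_log[0]["user"] == "alice"
```

Because every connector goes through the same `tool_call` path, adding an integration changes the catalog, not the identity layer - which is the property the Von team is describing.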
Best fit: Production-grade agent products where the Arcade model is right in principle but needs to scale across many enterprise customers with different permission requirements, compliance expectations, and audit demands.

The trade-off relative to Arcade is clear: Composio has roughly 10x the integration catalog and dramatically faster time-to-first-tool-call. The auth model is weaker - it works, but it doesn't implement proper OAuth delegation - and the closed-source tools mean you're consuming, not owning, your tool definitions. There's also no per-tenant authorization layer.
Best fit: Prototyping and single-tenant use cases where integration breadth matters more than auth model correctness. For a full breakdown, see Composio alternatives.

Nango approaches the tool calling problem from the integration build layer rather than the auth layer. Tool definitions are TypeScript functions in your repo - you write them (or have a coding agent write them), deploy to Nango's runtime, and Nango handles execution, auth, retries, and rate limiting.
The observability story is the strongest in this comparison: full API request/response visibility, custom log messages, OpenTelemetry export. 700+ APIs supported. Open source. Usage-based pricing that doesn't scale with customer count.
Best fit: Teams that want code-level ownership of their integration logic alongside a managed execution runtime.

Merge's governance and DLP features are mature: PII scanning on request/response, rule enforcement per tool pack, granular audit logs. The compliance story is credible and battle-tested.
The structural constraint: Merge Agent Handler is a layer on top of a Unified API designed for deterministic SaaS integration code. The normalized data schemas strip out the API-specific semantics that agents need for reliable tool use, and tool definitions aren't configurable per tenant.
The auth model is also meaningfully different from Arcade's: Merge's credential management was designed for developer-initiated integration code, not per-user OAuth delegation for agent-initiated actions.
Best fit: Teams already invested in the Merge ecosystem. Not a natural fit for teams who chose Arcade specifically for its auth model.

ActionKit gives agents 1,000+ tools via a single API call. The embedded Connect Portal for end-user authorization is polished. The constraint relative to Arcade: Paragon's auth model reflects its embedded iPaaS heritage. Per-tenant authz enforcement and audit infrastructure are less developed.
Best fit: ISVs where integration breadth and embedded auth UX matter more than auth model correctness.
The teams most likely to be evaluating Arcade alternatives are not questioning the auth-first premise; they're testing whether Arcade has built enough around that premise for their specific use case.
The gaps that surface most often: catalog coverage that doesn't reach their integration requirements, enterprise governance features that aren't comprehensive enough for their customers' security reviews, or production confidence that comes from track record rather than architecture alone.
The alternatives above each make a different trade-off around the same core problem. The right choice is the one that maps to your actual constraints in production, not the one that wins the feature checklist.