The big advantage of MCP over OpenAPI is that it is very clear about auth. OpenAPI supports too many different auth mechanisms, and the schema doesn't necessarily have enough information for a robot to be able to complete the auth flow.
*gets up on soap box*
With the announcement of this new "code mode" from Anthropic and Cloudflare, I've gotta rant about LLMs, MCP, and tool-calling for a second
Let's all remember where this started
LLMs were bad at writing JSON
So OpenAI asked us to write good JSON schemas & OpenAPI specs
But LLMs sucked at tool calling, so it didn't matter. OpenAPI specs were too long, so everyone wrote custom subsets
Then LLMs got good at tool calling (yay!) but everyone had to integrate differently with every LLM
Then MCP comes along and promises a write-once, integrate-everywhere story.
It's OpenAPI all over again. MCP is just OpenAPI with slightly different formatting, and there's no real justification for redoing the same work we did to write OpenAPI specs, just slightly differently
MCP itself goes through a lot of iteration. Every company ships MCP servers. Hype is through the roof. Yet actual use of MCP is super niche
But now we hear MCP has problems. It uses way too many tokens. It's not composable.
So now Cloudflare and Anthropic tell us it's better to use "code mode", where we have the model write code directly
Now this next part sounds like a joke, but it's not. They generate a TypeScript SDK based on the MCP server, and then ask the LLM to write code using that SDK
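To make that concrete, here's roughly what it looks like; the SDK shape and names below are made up, standing in for whatever a generator would emit from an MCP server's tool schemas:

```ts
// Hypothetical SDK a "code mode" harness might generate from an MCP
// server's tool schemas -- the names here are illustrative, not from
// any real generator.
interface Issue {
  id: string;
  title: string;
  labels: string[];
}

interface IssueTrackerSdk {
  listIssues(params: { label?: string; open?: boolean }): Promise<Issue[]>;
  addComment(params: { issueId: string; body: string }): Promise<void>;
}

// The model is then asked to write ordinary TypeScript against that SDK,
// instead of emitting one JSON tool call at a time.
async function closeOutStaleBugs(sdk: IssueTrackerSdk): Promise<number> {
  const stale = await sdk.listIssues({ label: "stale", open: true });
  for (const issue of stale) {
    await sdk.addComment({
      issueId: issue.id,
      body: `Closing "${issue.title}" as stale.`,
    });
  }
  return stale.length;
}
```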
Are you kidding me? After all this, we want the LLM to use the SAME EXACT INTERFACE that human programmers use?
I already had a good SDK at the beginning of all this, automatically generated from my OpenAPI spec (shout-out @StainlessAPI)
Why did we do all this tool calling nonsense? Can LLMs effectively write JSON and use SDKs now?
The central thesis of my rant is that OpenAI and Anthropic are platforms and they run "app stores", but they don't take this responsibility and opportunity seriously. And it's been this way for years. The quality bar is so much lower than the rest of the stuff they ship. They need to invest the way Apple invests in Swift and Xcode. They think they're an API company like Stripe, but they're a platform company like an OS.
I, as a developer, don't want to build a custom ChatGPT clone for my domain. I want to ship ChatGPT and Claude apps so folks can access my service from the AI they already use
Thanks for coming to my TED talk
Nov 8, 2025 · 5:21 PM UTC
Maybe an agent could read the docs and write code to auth. But we don't actually want that, because it implies the agent gets access to the API token! We want the agent's harness to handle that and never reveal the key to the agent.
But we can't have every harness implementing every auth mechanism ever. We'd at least have to say "must be OAuth2". But even then, client registration is a problem.
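Roughly the picture I have in mind (a sketch only; the wrapper and names are hypothetical): the harness hands the sandboxed agent code a pre-authenticated fetch, and the token itself never enters the model's context.

```ts
// Hypothetical harness-side wrapper: the agent's code receives the
// returned closure, but the bearer token stays in the harness process
// and never appears in the model's context window.
function makeAuthenticatedFetch(baseUrl: string, token: string) {
  return async (path: string, init: RequestInit = {}): Promise<Response> => {
    const headers = new Headers(init.headers);
    headers.set("Authorization", `Bearer ${token}`); // injected by the harness, not the agent
    return fetch(new URL(path, baseUrl), { ...init, headers });
  };
}

// The harness only ever passes the closure into the sandboxed agent code:
// const callApi = makeAuthenticatedFetch("https://api.example.com", secret);
```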
OAuth has always assumed that the client knows what API it's talking to, and so the client's developer can register the client with that API in advance to get a client_id/client_secret pair. Agents, though, don't know what MCPs they'll talk to in advance.
So MCP requires OAuth dynamic client registration (RFC 7591), which practically nobody actually implemented prior to MCP. DCR might as well have been introduced by MCP, and may actually be the most important unlock in the whole spec.
(RFC 7591 has problems, incidentally, and is likely to be replaced by something better. But that doesn't change the fact that dynamic registration was mostly not supported at all prior to MCP.)
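For reference, an RFC 7591 registration is roughly this shape; the endpoint and client metadata below are illustrative only:

```ts
// Rough sketch of an RFC 7591 dynamic client registration request.
// The registration endpoint and metadata values are made up.
async function registerClient(registrationEndpoint: string) {
  const res = await fetch(registrationEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      client_name: "Example MCP client",
      redirect_uris: ["http://127.0.0.1:33418/callback"],
      grant_types: ["authorization_code", "refresh_token"],
      response_types: ["code"],
      token_endpoint_auth_method: "none", // public client, e.g. a local agent
    }),
  });
  if (!res.ok) throw new Error(`registration failed: ${res.status}`);
  // The server mints credentials on the fly -- no pre-registration needed.
  const { client_id, client_secret } = await res.json();
  return { client_id, client_secret };
}
```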
A second benefit of MCP -- which code mode perhaps negates somewhat -- is that it forced people to design simpler API surfaces that an agent could understand and work with. Many APIs are just ridiculously complicated, and agents are more easily overwhelmed than humans.
OpenAPI is designed to describe HTTP / RESTful APIs, which are particularly difficult to think about (even for humans). MCP presents a simple function call API instead: much easier.
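A tiny illustration of the gap (endpoint, tool name, and fields are all hypothetical): the same operation as a REST call the model has to assemble piece by piece, versus a flat named procedure with one arguments object.

```ts
// Stand-in for however the harness dispatches an MCP tool call.
declare function callTool(name: string, args: Record<string, unknown>): Promise<unknown>;

async function refundViaRest(): Promise<void> {
  // REST: the model must assemble method, path, query string, and body,
  // then interpret status codes. (URL and fields are hypothetical.)
  const res = await fetch("https://api.example.com/v2/orders/1234/refunds?notify=true", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ amount: 500, reason: "damaged" }),
  });
  if (!res.ok) throw new Error(`refund failed: ${res.status}`);
}

async function refundViaTool(): Promise<void> {
  // MCP-style: one named procedure, one arguments object.
  await callTool("refund_order", { orderId: "1234", amount: 500, reason: "damaged", notify: true });
}
```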
My spicy opinion is that REST was a mistake all along (even before AI entered the picture). REST is all about forcing programming interfaces into the framework of HTTP, which wasn't ever really designed for that.
REST feels good because it fits into HTTP and we have all this infrastructure and understanding around HTTP on the net. But it's actually unnatural. Nobody writes in-process APIs this way. Why should network APIs be so different?
IMO the best future for APIs would be if we ditched REST and instead designed network APIs more similarly to in-process APIs, which are easier to reason about for AI -- and also for humans.
It also seems vastly easier to think about sandboxing and permissions when dealing with TypeScript APIs instead of REST. I illustrated this in this talk I gave recently: piped.video/watch?v=xUj4HQt_…
But I think MCP may have dumbed down interfaces a bit *too* much. With code mode we now realize that AI can handle more complexity if it's presented as code instead of "tool calls". But MCP only lets you express a flat set of procedures, not rich OOP or functional interfaces.
I'm hoping to solve this with Cap'n Web, our RPC protocol that supports full bidirectional calling, higher-order functions, object-capabilities, streaming, etc. Basically lets you expose a rich TypeScript interface over the network. blog.cloudflare.com/capnweb-…
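To show the kind of interface I mean (a sketch only; these names are invented for illustration, not Cap'n Web's actual API): objects as capabilities, plus callbacks passed across the wire.

```ts
// Sketch of the kind of rich interface an RPC layer like Cap'n Web aims
// to let you expose over the network. Names and types are illustrative.
interface Workspace {
  // Returning an object is handing out a capability: the caller can only
  // touch documents it was explicitly given.
  openDocument(id: string): Promise<Document>;
  // Higher-order: the server can call back into a function the client
  // passed. The return value is an unsubscribe function.
  watch(onChange: (docId: string) => void): Promise<() => void>;
}

interface Document {
  read(): Promise<string>;
  append(text: string): Promise<void>;
}

// A flat MCP tool list can't express this shape: every call would have to
// be a top-level procedure keyed by string IDs, with permissions re-checked
// on every request instead of carried by the object reference itself.
```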
So the direction I'm excited to explore is MCP's auth framework + Cap'n Web APIs specified as TypeScript. But MCPish auth + REST APIs specified with OpenAPI is another route. I'm guessing we'll see both and it'll be interesting.









