On MCP and Knowing When to Care
Net API Notes for 2025-04-29, Issue 250
To the annals of arbitrary milestones, I am pleased to add THE 250TH EDITION OF NET API NOTES! That is 250 attempts to weigh claims, make value judgments, and gently interrogate what's happening in our particular discipline. To zhuzh up this newsletter for the occasion, I'm not doing a how-to or a top-10 list. Instead, I want to reflect on how we determine what's worth caring about. Today's buzz around the Model Context Protocol, or MCP, is the perfect jumping-off point; not because it is the most important thing happening, but because the reaction to it says a lot about our collective behavior in the face of newness, complexity, and the algorithmically amplified promise of relevance.

What is MCP (really), and why are we discussing it?
If you haven't heard of MCP before now, I'm jealous. Among a particular type of habitually online API practitioner, MCP has been the dominant topic for the last several months.
Proposed by Anthropic at the end of last year, the Model Context Protocol (MCP) is an attempt to make APIs more accessible to AI agents. MCP provides a standardized way for API providers to expose semantic information about what their APIs do, so that an agent (triggered from an interface, like Anthropic's Claude) can understand not only how to call them, but also when it might want to.
The appeal is easy to understand: rather than building a unique integration for each API an agent might call (something difficult to scale if you're a company like Anthropic), MCP provides a structured, machine-readable description of an API's intent, purpose, and context, allowing an LLM to reason about when and how to use it.
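To make the core idea concrete, here is a rough sketch of the pattern. To be clear, this is not the actual MCP wire format or SDK; the names and structure below are invented for illustration. A provider publishes "tools" that bundle a callable with the semantic context an agent needs to choose among them:

```python
# Schematic sketch of the MCP idea (names invented, not the real spec):
# a provider registers tools that pair a callable with the semantic
# context an agent needs to decide when to invoke it.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolDescription:
    name: str
    purpose: str      # when an agent *should* reach for this tool
    parameters: dict  # machine-readable input schema
    handler: Callable # the actual API call behind the tool

registry: dict[str, ToolDescription] = {}

def register(tool: ToolDescription) -> None:
    registry[tool.name] = tool

def lookup(task_keywords: set[str]) -> list[str]:
    """Naive stand-in for an LLM matching a task to relevant tools."""
    return [
        name for name, tool in registry.items()
        if task_keywords & set(tool.purpose.lower().split())
    ]

register(ToolDescription(
    name="get_weather",
    purpose="look up current weather conditions for a city",
    parameters={"city": {"type": "string"}},
    handler=lambda city: {"city": city, "temp_c": 21},
))

print(lookup({"weather", "forecast"}))  # → ['get_weather']
```

The `lookup` heuristic is the hand-wave: in practice, an LLM does that matching, which is exactly why the quality of the `purpose` text matters so much.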
It is worth noting that the information MCP provides is often the same semantic intent that could already be expressed in an OpenAPI description. The problem is that those OpenAPI fields - summary, description, examples, even the use-case context that consuming developers would benefit from - are frequently left blank, underwritten, or overly terse in practice. Not because it's impossible, but because teams often ship the minimal viable description and call it done. The result is a landscape of APIs that speak in code but say very little. MCP doesn't change that pattern so much as wrap it in a new one, requiring the same neglected semantics, but this time dictated by protocol rather than best practice.
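The gap is easiest to see side by side. Below are two descriptions of the same hypothetical endpoint (the operation, field values, and the "agent-ready" heuristic are all invented for illustration): the first is the minimum-viable pattern described above, the second carries the semantic intent an agent, or a human, could actually reason about.

```python
# Two OpenAPI-style operation descriptions for the same hypothetical
# endpoint, expressed as Python dicts for brevity.

terse = {
    "summary": "Get invoice",
    "description": "",
}

semantic = {
    "summary": "Retrieve a finalized invoice by its identifier",
    "description": (
        "Returns the immutable, finalized invoice record. Use this when a "
        "caller needs the authoritative billed amount - not the draft "
        "endpoint, whose totals may still change."
    ),
    "examples": {"invoiceId": "inv_1234"},
}

def is_agent_ready(operation: dict) -> bool:
    """Crude, illustrative heuristic: is there enough prose to act on?"""
    return len(operation.get("description", "")) > 40

print(is_agent_ready(terse), is_agent_ready(semantic))  # → False True
```

The point isn't the heuristic; it's that nothing in the terse version is missing from the OpenAPI specification itself. The fields exist. The effort doesn't.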
It does make for some really interesting demos. Michael Coté has a great series where he leverages AI and MCP to play D&D. Even if you don't program in Spring Java, like Coté, the videos provide a relatable demonstration of why LLMs need access to authoritative, stable, "ground-truth" data rather than plausible-sounding, probabilistic pastiche.
If it is a work in progress, why does MCP feel bigger than it is?
MCP does not solve an urgent, widespread problem; most organizations today are still wrestling with unversioned APIs, missing documentation, and brittle integrations. Nor is it that the protocol is battle-hardened. Early implementations have surfaced issues with authentication, identity ambiguity, prompt injection, and tool poisoning. (Sam Julien has a nice summary of associated issues.) It isn't even the first attempt to define an agent-API bridge. As a new Gartner report points out (paywall, of course), there are Cisco's AGNTCY Agent Connect Protocol (ACP) and Agent Gateway Protocol (AGP) proposals, Wildcard's agent.json, LangChain's agent-protocol, and Google's A2A.
But there's something else happening every time I open up my LinkedIn feed. It's not the excitement over MCP as a solution. It's the performance of relevance, something the API community, like much of tech, is increasingly skilled at. MCP has become a vessel not just for experimentation but for status, visibility, and influence. It is driven by several powerful dynamics converging at once:
MCP serves as an API on-ramp for the AI bandwagon
MCP gives long-time API experts a way to participate in the AI boom without switching domains. By reframing agent integration as "the next must-have API skill," it offers seasoned practitioners a way to stake a claim in the future that investors are aggressively chasing and funding.
Clout moves faster than craft
There's more reward for being early than for being right. You don't need a working implementation; you just need a hot take, a screenshot, a memeable post. The attention economy favors the shallow end of the pool. Nobody remembers if you were wrong. But if you gamble and are right, that clout can be spent on future speaking, writing, and consulting opportunities!
The LinkedIn debate-a-tron 3000 loves the froth
Relatedly, much of the current excitement is algorithmically amplified. It's not that MCP is necessarily more important than other efforts - it's just more resonant within the cadence of posts, likes, newsletters, and conference topic proposals. And if somebody disagrees with a half-baked take, the algorithm interprets that as "engagement" and promotes it further!
A promise of solving a people problem with technology
The problem MCP purports to solve - semantics in API descriptions - is real. However, the solution it offers is architectural, not organizational. It treats an absence of human effort (writing clear, purposeful documentation) as something we can fix by introducing another layer of machine-readable expectations. Developing people capable of contributing rich semantic meaning lies outside most technology leaders' repertoire. But adding another geegaw to the tech stack is familiar, budgetable, and easier to justify than investing in the slow, patient work of growing organizational knowledge and discipline. And buying or building tech feels like action to stakeholders.
Speed is mistaken for credibility
The sheer momentum of tooling adoption (Postman, Kong, Cloudflare, Kestra, etc.) creates a sense that something must be real because it's moving fast. But fast isn't the same as foundational. And tooling inertia isn't the same as consensus or long-term viability.
You don't need to believe that these dynamics are sinister to recognize their impact. Together, they create a distortion field where technical nuance gets flattened into digestible narrative; one where measured critique is drowned out by performative consensus.
And when that happens, we risk mistaking hype for progress, novelty for value, and noise for signal.
Who actually benefits from MCP?
But just pointing at the distortion field and saying, "This is broken," isn't enough. To navigate what's happening, we must look harder and ask, "Who is positioned to win from this?" and "Who gets diminished in the process?" Things are changing, but success is not evenly distributed.
Winner: The Foundation Model Vendors
Anthropic, OpenAI, and their peers are the primary beneficiaries of MCP adoption. The more APIs are exposed in machine-readable ways, the more surface area models have to act upon, making them seem more powerful, capable, and sticky. In that sense, MCP is less about empowering API providers and more about enriching the agentic platforms that sit on top of them.
The AI vendors get more usable actions with less manual integration overhead. They win by becoming more functional with less effort.
Winner: The Emerging AI Tooling Ecosystem
Companies offering agent hosting, MCP server frameworks, agent orchestration platforms, and authentication overlays are racing to position themselves as critical middleware in this new stack. MCP gives these startups a narrative: "Your AI agents can't operate in the real world without us!" It’s a land grab for future relevance and eventual acquisition.
Future Cloudy: The API Providers
At first glance, MCP sounds empowering for API providers: "Your API could be called by autonomous agents!" But without strong differentiation, APIs simply become interchangeable commodities: a faceless service endpoint among many.
Unless an API offers something unique, it risks being reduced to background noise, accessed (if at all) through the lens of whatever LLM is orchestrating the interaction. More visibility, but less pricing power.
How to stay oriented in a hype cycle
I started Net API Notes ten years ago to document my struggle to understand the API space. Two hundred and fifty issues later, I feel comfortable saying that this pattern isn't unique to MCP. Time and time again, I've seen that this is how platform evolution plays out:
- Early promises focus on openness, interoperability, and empowerment.
- Over time, the gravity shifts toward consolidation, aggregation, and lock-in.
MCP's rise just accelerates the transition from direct integration (where users and developers control the interaction) to intermediated access (where AI models broker the interaction on your behalf).
The further you are from owning the agent's point of decision-making, the less control - and ultimately, the less value - you can capture.
Understanding who benefits isn’t just an academic exercise; it’s a survival skill.
If you're an API strategist, platform owner, or developer today, you need to be clear-eyed about what you're building into, and who sets the rules of engagement tomorrow. MCP may offer new opportunities, but it also reshuffles who owns the customer relationship, who captures the value, and who gets reduced to a replaceable part. What feels empowering now can later become limiting.
When the next new protocol, platform, or AI capability appears, ask yourself three simple questions:
- Who is making this easier, and for whom?
- Where does control consolidate as a result?
- Am I building toward more leverage or giving it away?
One of the few durable advantages we have is learning to ask these questions early, before the tooling, the hype, and the consensus are set.
Here's to 250 issues of trying to see things for what they are. And the hope for remaining clear-eyed for the next 250.
Milestones
The US National Institute of Standards and Technology has released "Guidelines for API Protection for Cloud-Native Systems". They are asking for feedback, and comments are open until May 12th.
Wrapping Up
The topic of Agentic AI and technology hyperbole has been on my mind for some time. If you're interested in reading more of what I have to say on both topics, check out my presentation from last year, Surviving the TurboEncabulator, and Ironies of Agentic AI.
Also, it's hard to believe there have been 250 editions of this newsletter; perhaps that is a nothing-burger to outlets that publish daily or even weekly. However, for a niche interest where I try to remain evergreen, it feels pretty notable (but I'm biased).
Thank you to those who have been subscribers from the beginning. I also want to thank those who discovered me later for taking a chance and jumping on board. I can't promise that the advice in these emails will make you rich, but I can guarantee that it will be unique.
Till next time,
Matthew
(@matthew in the fediverse and matthewreinbold.com on the web)