The announcement that did not look important#
On 25 November 2024, Anthropic announced the Model Context Protocol. The post was technical, the demo was understated, and the immediate reception was a polite "cool, a protocol". A few people built weekend projects, Hacker News had a reasonable thread, and it did not feel like a major shift.
Fifteen months later, MCP is one of the most consequential pieces of AI infrastructure that shipped in 2024. Both of Anthropic's major competitors have adopted it. It is supported across Claude, ChatGPT, Gemini, Cursor, Windsurf, and several hundred other clients. The official registry lists around 500 servers and growing. The spec is on its fourth major revision.
None of this was obvious at launch. The pattern of who adopted it, when, and why tells a specific story about where AI value is being captured, and what developers should pay attention to. This post is about that pattern.
What MCP actually is#
MCP is a protocol that lets an AI assistant like Claude communicate with external tools and data sources in a standardised way. Before MCP, every integration was custom. You wanted Claude to read your Gmail, you wrote custom Gmail glue. You wanted it to query your database, you wrote custom database glue. Every tool had its own authentication story, its own calling convention, its own error handling.
MCP defined a common interface. An MCP server exposes tools (functions the model can call) and resources (data the model can read) over a standard transport. An MCP client knows how to talk to any MCP server without custom integration work.
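Concretely, MCP speaks JSON-RPC 2.0. A minimal sketch of the two core tool methods, `tools/list` and `tools/call`, with a toy in-process dispatcher standing in for a real server and transport (the `add` tool is a made-up example):

```python
# Toy MCP-style server: a tool registry plus a JSON-RPC 2.0 dispatcher.
# A real server would expose things like file reads or database queries
# over stdio or HTTP; the shape of the messages is what MCP standardises.
TOOLS = {
    "add": {
        "description": "Add two integers.",
        "inputSchema": {"type": "object",
                        "properties": {"a": {"type": "integer"},
                                       "b": {"type": "integer"}}},
        "fn": lambda args: args["a"] + args["b"],
    }
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC request for the two core MCP tool methods."""
    method, rid = request["method"], request["id"]
    if method == "tools/list":
        result = {"tools": [{"name": n,
                             "description": t["description"],
                             "inputSchema": t["inputSchema"]}
                            for n, t in TOOLS.items()]}
    elif method == "tools/call":
        params = request["params"]
        value = TOOLS[params["name"]]["fn"](params["arguments"])
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}

# A client only needs to know this message shape, never the server's internals.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "add", "arguments": {"a": 2, "b": 3}}})
print(call["result"]["content"][0]["text"])  # -> 5
```

Because every server answers the same two methods with the same envelope, a client written once can enumerate and invoke tools on any server it is pointed at.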
The architecture analogy is HTTP. HTTP did not add new capabilities to computers — it standardised how they talked. Once it existed, the cost of integration collapsed, and the total amount of integration exploded. MCP is doing the same thing for AI tool use. The fact that it is boring is part of why it is load-bearing.
The current specification, version 2025-11-25, adds Streamable HTTP Transport, an MCP Tasks extension (SEP-1686), Triggers, and mandatory OAuth 2.1 with PKCE for clients.
The adoption timeline#
The sequence of who signed on matters, because it shows the protocol passing each credibility test in turn.
November 2024: Anthropic announces MCP and releases reference implementations. Initial client is Claude Desktop.
March 2025: OpenAI officially adopts MCP. Sam Altman announces it publicly on 26 March 2025 (TechCrunch coverage). This was the moment the protocol moved from "vendor-specific thing" to "emerging standard".
April 2025: Google announces MCP adoption for Gemini on 9 April 2025 (TechCrunch coverage), later confirmed with official Google Cloud support.
October 2025: OpenAI ships full MCP support in ChatGPT Developer Mode, including the Agents SDK, Responses API, and Desktop (InfoQ coverage).
November 2025: Spec update 2025-11-25 ships with Streamable HTTP, OAuth 2.1, and the Tasks extension.
By the time the major model vendors all supported the same protocol, it was effectively the default. The run from initial announcement to universal adoption was unusually fast for an infrastructure protocol. USB took years. HTTP took years. MCP took about five months from Anthropic's launch to OpenAI signing on, and about eleven months to full ChatGPT integration.
The registry and the ecosystem#
The official MCP registry moved from preview in September 2025 to a production registry listing around 518 servers as of February 2026. The reference implementations repo has 84,100 GitHub stars and hosts seven actively maintained reference servers plus thirteen archived ones.
Beyond the official registry, third-party directories (Glama is the largest) list tens of thousands of community-built MCP servers. The total ecosystem is well north of 10,000 servers in 2026. Not all of them are serious, but the volume shows what happens when integration cost collapses.
The client side is broader. As of early 2026, MCP is supported in: Claude Desktop and Claude Code (reference), Cursor, Windsurf, Zed, VS Code via GitHub Copilot, Cline, Replit, Continue.dev, Sourcegraph Cody, Gemini CLI, and ChatGPT Desktop Developer Mode. The Fast.io MCP client directory lists over 300 clients in various states of support.
What got enabled that did not exist before#
The interesting thing is not the protocol itself — the protocol is fine but not exceptional. The interesting thing is what the protocol made possible that did not exist before.
Cross-client compatibility. A server you write for Claude Desktop also works in Claude Code, Cursor, Windsurf, and Zed. Write once, run wherever. Before MCP, each integration was tied to a specific AI product.
Low-friction personal context. Local MCP servers that run on your own machine and expose your files, notes, calendar, or private data to your AI client without going through a cloud service. This is the use case that changed my own day-to-day AI use the most. My Claude Desktop knows about my work in a way the generic product never could, because MCP lets me wire in servers for my file system, Drive, Gmail, and custom tools, all running locally.
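Wiring in a local server is a few lines of JSON in the client's config. A typical `claude_desktop_config.json` entry for the official filesystem server looks like this (the path is a placeholder for whichever directory you want to expose):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/notes"]
    }
  }
}
```

The client launches the server as a local subprocess and talks to it over stdio, so the data never has to leave the machine.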
Enterprise integrations without vendor lock. Large companies that would not expose data through OpenAI plugins were willing to write MCP servers because the protocol is open, the transport is standard, and the integration runs on their infrastructure. This unlocked a class of use cases that simply could not happen in a vendor-plugin-ecosystem architecture.
Composability. When you have twenty MCP servers installed, your AI can chain them. Read this email, extract the invoice, file it, tag it, update the ledger. Each step is trivial on its own. The composition is what makes the productivity gain real.
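The reason composition falls out for free is that every tool sits behind the same call interface, so outputs can feed inputs without any pairwise glue. A sketch with hypothetical stand-in tools:

```python
# Hypothetical tools behind one uniform call(name, args) interface,
# standing in for real MCP servers (email, invoice OCR, ledger, ...).
def call(tool: str, args: dict) -> dict:
    tools = {
        "email.read":      lambda a: {"body": "Invoice #42: $300 due"},
        "invoice.extract": lambda a: {"number": 42, "amount": 300},
        "ledger.update":   lambda a: {"recorded": a["number"], "amount": a["amount"]},
    }
    return tools[tool](args)

# The assistant chains them: each step's output feeds the next call.
msg = call("email.read", {"id": "msg-1"})
inv = call("invoice.extract", {"text": msg["body"]})
entry = call("ledger.update", inv)
print(entry)  # {'recorded': 42, 'amount': 300}
```

Before MCP, each arrow in that chain was a bespoke integration; now it is just another tool call.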
What MCP still gets wrong#
The protocol is not finished. The honest gaps:
Authentication is underspecified at the protocol level. The spec says OAuth-style flows are supported, and the 2025-11-25 update made OAuth 2.1 with PKCE mandatory for clients, but backend authentication is explicitly out-of-scope. In practice, getting authentication right across different servers is still painful. Some want API keys, some want OAuth, some want bearer tokens in environment variables. The 2026 MCP roadmap acknowledges this and makes enterprise-readiness a focus.
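For the one piece the spec does mandate, the mechanics are standard. PKCE (RFC 7636) has the client generate a one-time secret, send only its hash with the authorization request, and reveal the secret at token exchange. A minimal sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE verifier and its S256 challenge per RFC 7636."""
    # 32 random bytes -> 43-char base64url verifier (within the 43-128 limit).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA256(verifier)), unpadded.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (code_challenge_method=S256) up front,
# then `verifier` with the token request; the server recomputes and compares.
print(len(verifier), len(challenge))  # 43 43
```

The painful part is everything around this: which servers want PKCE flows at all versus API keys or bearer tokens, and how a client stores and refreshes credentials per server.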
Discoverability is weak. The official registry is a list. There is no canonical way to find "servers that do X" without reading descriptions. Community directories fill the gap imperfectly.
Enterprise gaps. Audit trails, SSO integration, gateway patterns, and configuration portability are all identified by the 2026 roadmap as needed enterprise features that the current spec does not cover. The WorkOS analysis of the roadmap is a good summary of where the gaps are.
Statefulness is awkward. MCP is fundamentally request-response. For workflows with multi-step state, state has to be encoded in parameters or returned in responses. This works but is not elegant. The Tasks extension (SEP-1686) is a first attempt to address this.
What the adoption pattern tells you#
The most interesting thing about MCP is not any individual feature. It is that Anthropic shipped a standard and open-sourced it, and within months the two major competitors adopted it. That does not happen by accident. It happens when the protocol is obviously needed, when shipping is fast enough to beat alternative proposals, and when the author does not try to lock the protocol to their own products.
For anyone building AI-adjacent products in 2026:
- MCP support is now table stakes for developer-facing AI tools. Not having it is a conspicuous absence.
- Integrations that used to be their own products are becoming MCP servers. If you are selling a product whose primary value was "we connect X to ChatGPT", that category is commoditising.
- The leverage is now in composition, not individual integrations. The value is in what your product lets AI do with several MCP servers working together, not in the single server itself.
- Authentication is the competitive edge. Whoever solves the cross-server auth story well captures the enterprise MCP market, because everyone else has to solve it per-server.
MCP is an infrastructure protocol. Infrastructure protocols are boring, and the boring ones often matter most. The fact that the major labs all adopted it within five months is the signal. Everything else is implementation detail.
Further reading#
- The Quiet Death of AI Agents for the context on why narrow, tool-using agents are the shape that works.
- Vibe Coding Is a Lie for the related observation that AI tools work best when carefully integrated.
- Cursor to Claude Code and Back for how the actual coding tools compare.
Sources#
- Anthropic MCP announcement, 25 Nov 2024: https://www.anthropic.com/news/model-context-protocol
- MCP Specification (current, 2025-11-25): https://modelcontextprotocol.io/specification
- Official MCP Registry: https://registry.modelcontextprotocol.io/
- Reference servers repo: https://github.com/modelcontextprotocol/servers
- OpenAI adopts MCP, TechCrunch, 26 March 2025: https://techcrunch.com/2025/03/26/openai-adopts-rival-anthropics-standard-for-connecting-ai-models-to-data/
- Google adopts MCP, TechCrunch, 9 April 2025: https://techcrunch.com/2025/04/09/google-says-itll-embrace-anthropics-standard-for-connecting-ai-models-to-data/
- Google Cloud official MCP support: https://cloud.google.com/blog/products/ai-machine-learning/announcing-official-mcp-support-for-google-services
- ChatGPT MCP Developer Mode, InfoQ, Oct 2025: https://www.infoq.com/news/2025/10/chat-gpt-mcp/
- 2026 MCP roadmap: https://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/
- WorkOS analysis of MCP enterprise readiness: https://workos.com/blog/2026-mcp-roadmap-enterprise-readiness
- MCP client directory: https://fast.io/resources/best-mcp-clients-developers/
