MCP vs. API Explained
Model Context Protocol (MCP) standardizes AI integration with external tools, simplifying development, enabling dynamic discovery, and facilitating real-time communication, while traditional APIs may still be preferred for precise control.
Model Context Protocol (MCP) is an open protocol designed to standardize the way applications provide context to Large Language Models (LLMs). It serves as a uniform method for connecting AI systems to various tools and data sources, akin to a USB-C port for AI applications. MCP simplifies AI integrations by allowing a single standardized integration, as opposed to the multiple individual API integrations required by traditional methods. This reduces the complexity of development, as developers no longer need to write custom code for each service, manage separate documentation, or handle varied authentication methods. MCP supports dynamic discovery of tools, enabling AI models to interact with available resources without hard-coded knowledge. It also facilitates real-time, two-way communication, allowing AI models to both retrieve information and trigger actions. The architecture of MCP consists of clients that connect to servers, which expose functionalities and connect to local or remote data sources. While MCP offers significant advantages in flexibility, scalability, and real-time responsiveness, traditional APIs may still be preferable for use cases requiring precise control and predictability. Overall, MCP represents a significant advancement in the integration of AI agents with external data and tools, promoting more intelligent and context-aware applications.
- MCP standardizes connections between AI agents and external tools, simplifying integration.
- It allows for dynamic discovery and real-time communication, enhancing AI capabilities.
- MCP reduces development complexity by eliminating the need for multiple API integrations.
- Traditional APIs may still be better for applications requiring strict control and predictability.
- MCP supports scalability, enabling easy addition of new capabilities as AI ecosystems grow.
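As a concrete illustration of the client-server flow described above, here is a minimal sketch of MCP-style JSON-RPC messages. The `get_weather` tool and its schema are hypothetical; the method names follow the spec's `tools/list` / `tools/call` convention:

```python
import json

# Illustrative MCP-style JSON-RPC messages, as plain dicts.
# The client first asks the server what tools it exposes:
discover_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server answers with the tools it exposes, each described by a
# JSON Schema, so the model discovers capabilities at runtime.
discover_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Fetch current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# The client can then invoke a discovered tool with no hard-coded knowledge:
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

print(json.dumps(call_request["params"], indent=2))
```

This is the sense in which one standardized integration replaces many bespoke ones: every server speaks the same discovery and invocation messages.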
Related
Introducing The Model Context Protocol
Anthropic has open-sourced the Model Context Protocol (MCP) to enhance AI assistants' integration with data systems, improving response relevance and enabling developers to create secure connections and build connectors.
Anthropic proposes a new way to talk with chatbots
Anthropic has launched the Model Context Protocol (MCP), an open-source standard to connect AI chatbots with data sources, enhancing response relevance and simplifying development, despite competition from OpenAI.
Reflections on building with Model Context Protocol
The Model Context Protocol (MCP) by Anthropic improves LLM interactions but has limitations in Claude Desktop. The TypeScript SDK is effective, while the Python SDK has issues. Future enhancements are needed.
Show HN: Anthropic's MCP Server Directory
The Model Context Protocol (MCP) by Anthropic enables AI models to interact with resources through standardized servers, featuring 129 servers for various functionalities, primarily supported in Claude's desktop client.
Model Context Protocol (MCP)
The Model Context Protocol (MCP) standardizes AI tool integration, enabling applications to access external resources and perform complex tasks through a client-server model, enhancing functionality with tools like iMCP and hype.
But the comparison with HTTP is not a very good one, because MCP is stateful and complex. MCP is actually much more similar to FTP than it is to HTTP.
I wrote 2 short blog posts about this in case anyone is curious: https://www.ondr.sh/blog/thoughts-on-mcp
If you are building your own applications, you can simply use the "Tools APIs" provided by the LLM directly (e.g. https://platform.openai.com/docs/assistants/tools).
MCP is not something most people need to bother with unless you are building an application that needs extension, or you are trying to extend an application (like those I listed above). Under the hood, MCP is just an interface into the tools API.
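The tools pattern that comment refers to can be sketched offline, with no network call: you describe a function with a JSON Schema, pretend the model returned a tool call, and dispatch it locally. The schema shape mirrors OpenAI's function-calling format; `get_time` is a hypothetical tool:

```python
import json

def get_time(timezone: str) -> str:
    # Hypothetical tool implementation; a real one would look up the clock.
    return f"12:00 in {timezone}"

# Tool description the model would receive, in OpenAI's function-calling shape.
tools = [{
    "type": "function",
    "function": {
        "name": "get_time",
        "description": "Return the current time in a timezone",
        "parameters": {
            "type": "object",
            "properties": {"timezone": {"type": "string"}},
            "required": ["timezone"],
        },
    },
}]

# Pretend the model replied with this tool call (no API request is made here):
tool_call = {"name": "get_time", "arguments": json.dumps({"timezone": "UTC"})}

# Your code dispatches the call and would feed the result back to the model.
dispatch = {"get_time": get_time}
result = dispatch[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)  # 12:00 in UTC
```

MCP wraps this same loop behind a protocol so the tool definitions can live in a separate server process instead of your application code.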
2) Is this meaningfully different from just having every API provide a JavaScript SDK to access it, and then having the model write code? That's how humans solve this stuff.
3) If the AI is actually as smart at doing tasks like writing clients for APIs as people like to claim, why does it need this to be made machine readable in the first place?
So if you are here for MCP, I will use the opportunity to share what I've been working on the last few months.
I've hand curated hundreds of MCP servers, which people can access and browse via https://glama.ai/mcp/servers and made those servers available via API https://glama.ai/mcp/reference
The API lets you search for MCP servers, identify their capabilities via API attributes, and even access user-hosted MCP servers.
However, you can also try these servers using an inspector (available under every server) and also in the chat (https://glama.ai/chat)
This is all part of a bigger ambition to create an all-encompassing platform for authoring, discovering, and hosting MCP servers.
I am also the author of https://github.com/punkpeye/fastmcp framework and several other supporting open-source tools, like https://github.com/punkpeye/mcp-proxy
If you are also interested in MCP and want to chat about the future of this technology, drop me a message.
MCP reminds me of a new platform opportunity akin to the Apple App Store.
It's rapidly adopted, with offerings from GitHub, Stripe, Slack, Google Maps, AirTable, etc. Many more non-official integrations are already out there. I expect this will only gain adoption over the coming year.
MCP is probably easier for clients to implement, but it suffers from poor standardization, immaturity, and non-human readability. It clearly scratches an itch, but I think it's a local minimum that requires a tremendous amount of work to implement.
So all that's needed are API docs. Or what am I missing?
The value of MCP then depends on its adoption. If I need to write an MCP adapter for everything, its value is small. If everyone (API owners, OS, clouds, ...) puts in the work to have an MCP-compatible interface, it's valuable.
In a world where I need to build my own X-to-USB dongle for every device, I wouldn't use USB, to stay with the article's analogy.
Normally, when LSP runs against a remote server, you use a continuous (web)socket instead of individual API requests. This reduces parsing overhead and provides faster responses for small requests. Requests also carry cancellation tokens, which make it possible to cancel a request when it becomes unnecessary.
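The cancellation pattern described above can be sketched with asyncio task cancellation. Real LSP implementations send a `$/cancelRequest` notification over the wire; `slow_hover_request` here is a stand-in for an expensive server computation:

```python
import asyncio

async def slow_hover_request() -> str:
    await asyncio.sleep(10)  # stands in for an expensive server computation
    return "hover info"

async def main() -> str:
    task = asyncio.create_task(slow_hover_request())
    await asyncio.sleep(0.01)   # user moved the cursor; the result is now stale
    task.cancel()               # the "cancellation token" firing
    try:
        return await task
    except asyncio.CancelledError:
        return "request cancelled"

print(asyncio.run(main()))  # request cancelled
```

The key property is that the server can stop work it no longer needs to finish, which matters when a user is typing and invalidating in-flight requests constantly.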
While similar to MCP, ANP is significantly different. ANP is specifically designed for agents, addressing communication issues encountered by intelligent agents. It enables identity authentication and collaboration between any two agents.
Key differences include:
- Architecture: ANP uses a P2P architecture, whereas MCP follows a client-server model.
- Identity: ANP relies on W3C DID for decentralized identity authentication, while MCP utilizes OAuth.
- Data model: ANP organizes information using Semantic Web and Linked Data principles, whereas MCP employs JSON-RPC.
- Strengths: MCP might excel at providing additional information and tools to models and connecting models to the existing web. In contrast, ANP is particularly effective for collaboration and communication between agents.
Here is a detailed comparison of ANP and MCP (including the GitHub repository): https://github.com/agent-network-protocol/AgentNetworkProtoc...
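For flavor, here is a rough, illustrative contrast between the two data models described above. These are simplified shapes, not complete spec messages:

```python
# MCP traffic is JSON-RPC: a numbered request invoking a named method.
mcp_style = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "search", "arguments": {"q": "weather"}},
}

# ANP leans on Linked Data: a JSON-LD document anchored in a W3C DID,
# where identity and service endpoints are part of the data itself.
anp_style = {
    "@context": "https://www.w3.org/ns/did/v1",   # W3C DID context
    "id": "did:example:agent-123",                 # decentralized identity
    "service": [{
        "type": "AgentService",
        "serviceEndpoint": "https://example.com/agent",
    }],
}

print(sorted(mcp_style))  # ['id', 'jsonrpc', 'method', 'params']
```

The JSON-RPC shape assumes a server you are already connected to; the DID document is self-describing, which is what makes peer-to-peer discovery between agents possible.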
- Slack or comment to Linear/Jira with a summary of what I pushed
- pull this issue from Sentry and fix it
- pull this Linear issue and do a first pass
- pull in this Notion doc with a PRD then create an API reference for it based on this codebase, then create a new Notion page with the reference
MCP tools are what the LLM uses and initiates
MCP prompts are user-initiated workflows
MCP resources are the data that the APIs provide, and the structure of that data (because porting APIs to MCPs is not as straightforward). Anyway, please give me feedback!
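The three primitives can be sketched as plain data; the server contents here are hypothetical examples matching the workflows above:

```python
server = {
    # Tools: model-initiated actions the LLM chooses to invoke.
    "tools": [
        {"name": "create_issue", "description": "Open a tracker issue"},
    ],
    # Prompts: user-initiated workflow templates surfaced in the client UI.
    "prompts": [
        {"name": "summarize_push", "description": "Summarize what I pushed"},
    ],
    # Resources: structured data the server exposes for the model to read.
    "resources": [
        {"uri": "notion://docs/prd", "mimeType": "text/markdown"},
    ],
}

for primitive, items in server.items():
    print(primitive, len(items))
```

The split matters because who initiates the interaction differs: the model picks tools, the user picks prompts, and resources are just addressable data either side can reference.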
I've played a lot with the FileSystem MCP server but couldn't get it to do anything useful that I can't already do faster on my own. For instance, I asked it how many files contain the word "main". It returned 267, but in reality there are 12k.
Looks promising, but I am still looking for useful ways to integrate it into my workflow.
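A deterministic way to check an answer like the file count above is to compute it directly. This small sketch counts files under a directory whose contents contain a given word:

```python
from pathlib import Path

def count_files_containing(root: str, word: str) -> int:
    # Walk the tree and count files whose text contains the word.
    count = 0
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if word in path.read_text(errors="ignore"):
                    count += 1
            except OSError:
                continue  # skip unreadable files (permissions, etc.)
    return count
```

Comparing a model's answer against a trivially scriptable ground truth like this is a cheap way to catch the 267-versus-12k failure mode.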
Regular SDK lib:
- Integration effort: just like MCP
- Real-time communication: sure
- Dynamic discovery: obviously, just call refresh or whatever
- Scalability: infinite, it is a library
- Security & control: just like MCP
i truly don't get it
Following those two principles means your implementation ends up as a simple class with simple methods and simple params, possibly using decorators to expose it as RPC and to perform runtime type assertion on params (when exposing RPC, server side) and on results (when consuming RPC, client side). Consuming JSON-RPC then looks like using any ordinary library/package that happens to have async methods. This is important: there is no special dialect of communication, it's all the ordinary semantics everybody is already used to. Your code on the client and server side doesn't jump between mapping to/from the language and JSON-RPC, a lot of complexity collapses, and the code looks minimal, small, and natural to read.
Notifications also map naturally to a well-established pattern (i.e. the event emitter in Node.js).
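The class-plus-decorator idea can be sketched in Python. The dispatcher and type check are deliberately simplified; a real implementation would validate params against full schemas:

```python
import inspect

def rpc(fn):
    # Mark a method as exposed over RPC; nothing else changes about it.
    fn._rpc = True
    return fn

class Calculator:
    @rpc
    def add(self, a: int, b: int) -> int:
        return a + b

def call(obj, method: str, params: dict):
    # Server-side dispatch: look up the method, check it is exposed,
    # and assert param types at runtime from the annotations.
    fn = getattr(obj, method)
    if not getattr(fn, "_rpc", False):
        raise ValueError(f"{method} is not exposed over RPC")
    sig = inspect.signature(fn)
    for name, value in params.items():
        expected = sig.parameters[name].annotation
        if expected is not inspect.Parameter.empty and not isinstance(value, expected):
            raise TypeError(f"{name} must be {expected.__name__}")
    return fn(**params)

print(call(Calculator(), "add", {"a": 2, "b": 3}))  # 5
```

From the caller's side this is just an ordinary method call; the JSON-RPC framing would sit entirely inside `call` and its client-side twin.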
And yes, that's my main criticism of MCP – you're making a standard for communication meant to be used from different languages, so why add this silly, unnecessary complexity by using "/" in method names? It frankly feels like an amateur mistake by somebody who thinks it should be a bit like REST, where the method is a URL path.
Another tangent – this declaration of available endpoints is unnecessarily complicated. You can just use a URL: a file:// scheme to start a process on that executable with stdin/stdout as communication channels (this idea is great btw, good job!), ws:// or wss:// for websocket comms to an existing service, and http:// or https:// for JSON-RPC over HTTP (no notifications).
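The scheme-based transport selection proposed above can be sketched as a tiny dispatcher; the returned descriptions are illustrative, not part of any spec:

```python
from urllib.parse import urlparse

def pick_transport(endpoint: str) -> str:
    # The URL scheme alone tells the client which transport to use.
    scheme = urlparse(endpoint).scheme
    if scheme == "file":
        return "spawn process, talk over stdin/stdout"
    if scheme in ("ws", "wss"):
        return "websocket (supports notifications)"
    if scheme in ("http", "https"):
        return "jsonrpc over http (no notifications)"
    raise ValueError(f"unsupported scheme: {scheme}")

print(pick_transport("wss://example.com/mcp"))
```

One URL per server would then replace a separate declaration block per endpoint.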
Ok but why would every app and website implement this new protocol for the benefit of LLMs/agents?
Did they just now discover abstract base classes?
The only thing that idea ever led to was more (complicated) APIs.