What Is MCP? The Protocol Transforming AI Agents and LLM Integration

May 2, 2025

6 MIN. READ

Written by Chaz Englander

I’m not planning to become an educator, but people should really pay attention to this MCP (Model Context Protocol) chat—it's mind-bending. LLMs on their own aren't all that useful, which is why agents have exploded in popularity.

Agents are essentially LLMs with access to external tools. The problem with agentic systems that no one talks about is that as you layer on more capabilities, they become incredibly complex to maintain and often fail to work cohesively, leading to unreliable, brittle systems at scale. We should know; we've built arguably one of the most advanced agentic systems in the world.

Right now, each service provider speaks a different language as their APIs use different protocols, structures, request formats, etc. Plus, you have to constantly update each agent to keep up with new developments from each service provider, which is a huge hassle. This is where MCP comes in. MCP acts as a standardized layer between the LLM and external services, translating and simplifying communication.
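To make "standardized layer" concrete: MCP defines a common message format (based on JSON-RPC 2.0) so that calling any provider's tool looks the same on the wire. Here's a minimal sketch; the method name `tools/call` comes from the MCP spec, while the tool names and arguments below are hypothetical:

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# The same envelope works for any provider's tools -- only the tool
# name and arguments change, never the protocol itself.
slack_call = make_tool_call(1, "slack_fetch_message", {"channel": "#general"})
search_call = make_tool_call(2, "web_search", {"query": "green thai curry"})

print(json.dumps(slack_call, indent=2))
```

That uniformity is the whole point: an agent that can emit this envelope can talk to any MCP server, instead of learning a bespoke API per provider.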

So, how does the MCP ecosystem work, and who builds MCP? Anthropic (the inventors) set the standard, and it's up to service providers to build and maintain their own MCP integrations. If they don't, their services become difficult for LLMs to communicate with. However, some providers might argue they already have APIs and may hesitate to maintain their own MCP servers. This is where the open-source community comes in, developing and maintaining open-source MCP implementations, which is an exciting and powerful aspect of MCP.

Twelve months ago, it seemed almost inevitable that we needed a new protocol, but it was super unclear who would set the initial protocol and how it would become widely adopted, if at all. Two weeks ago, Sam Altman posted on X that "OpenAI people love MCP and we are excited to add support across our products" (which triggered this post). OpenAI enabling their Agents SDK to plug into MCP (alongside ChatGPT) makes it clear that MCP is poised to be the protocol of the future. Why is this so important?

Prior to MCP, if you wanted an LLM to work with an external tool (e.g. send an email, fetch a Slack message, perform a Google search, access a database), you had to build and maintain integrations for each one. Now, because MCP integrations standardize communication, adding new tools and maintaining them becomes infinitely simpler and far more robust. This means that soon you'll be able to interact with your entire digital ecosystem through a single application—retrieving information & performing actions.

For example: "Hey LLM, order the ingredients for a green Thai curry tonight." The LLM could first search (via Perplexity's MCP integration) for the list of ingredients, then place the order and track it for you (via UberEats' MCP integration).
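The curry example can be sketched as a toy agent loop that chains two tools. Everything here is a stub with hypothetical names (`perplexity_search`, `ubereats_order`); a real agent would let the LLM pick the tools and would speak JSON-RPC to live MCP servers rather than calling local functions:

```python
def perplexity_search(query: str) -> list[str]:
    # Stub standing in for a search tool exposed over MCP.
    return ["coconut milk", "green curry paste", "chicken", "thai basil"]

def ubereats_order(items: list[str]) -> str:
    # Stub standing in for an ordering tool exposed over MCP.
    return f"order placed for {len(items)} items"

def handle_request(prompt: str) -> str:
    # In a real agent the LLM decides which tools to chain and in what
    # order; here the plan is hard-coded: search, then order.
    ingredients = perplexity_search("green thai curry ingredients")
    return ubereats_order(ingredients)

print(handle_request("Order the ingredients for a green Thai curry tonight."))
```

The point of the sketch is the shape of the flow, not the stubs: once each service exposes an MCP integration, wiring new tools into this loop is configuration, not custom integration work.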

The incredible thing is that once these MCP integrations exist, you could realistically build the above example in a matter of hours from scratch.

To see the full LinkedIn post, click this link.

https://www.linkedin.com/posts/activity-7315709034366402560-2BmB

OFFICE LOCATIONS

New York

44 West 37th Street, New York

San Francisco

2261 Market Street, San Francisco

London

The Fjord Building, Kings Cross, London

Hong Kong

28 Stanley St Central, Hong Kong
