Besides OpenAI’s GPT, barely any technology had such a ripple effect on the LLM ecosystem as Anthropic’s Model Context Protocol, or MCP.
At the time of writing, every week, 6.7 million users download the TypeScript MCP SDK, and over 9 million developers download the MCP Python SDK. The GitHub topic ‘model-context-protocol’ lists over 1,100 repositories. There are over 16k active MCP servers, and new ones are created every day.
All leading LLMs, IDEs, and agent-to-agent communication platforms added MCP support. Cloud providers, Azure and AWS, rolled out services that enable building MCP workflows.
All this momentum makes MCP look like it could become the go-to standard for enterprise AI systems.
But just because a technology is popular doesn’t mean it’s ready for enterprise use. Companies need to think carefully about whether it’s actually production-ready, secure enough, and can scale properly.
In this post, we are going to examine how enterprise organizations in finance, media, and tech are building scalable MCP applications.
We will shed light on the shortcomings of the Model Context Protocol that complicate its enterprise adoption and explore the solutions to these problems.
How MCP took over AI protocols
When MCP arrived in late 2024 (and went viral in early 2025), engineers already had workarounds that allowed AI agents to call tools.
LangChain and LangGraph help accomplish the same purpose. OpenAPI is the older implementation of the same principle.
But MCP brought something different to the table. Instead of just describing how to call a tool, it handles the entire process, from connecting to the tool, running commands, and bringing the results back into your AI agent’s context.

The developer community has embraced MCP quickly, though it’s still catching up to more established frameworks in terms of overall adoption numbers.
Why MCP is a big deal for AI agents
The goal of MCP is to connect agents with any third-party tool or data.
This means your AI agent can pull data from spreadsheets, access cloud databases, or interact with web APIs without you having to build custom integrations for each one.
Understanding MCP architecture
MCP connects AI agents to tools, services, and documents by bridging three key components: Clients, servers, and data sources.

- MCP clients help AI assistants (e.g., Claude) get through to MCP servers. When Claude or Cursor needs to access a spreadsheet or the IDE, they use MCP clients to connect with tools and documents.
- Tool-specific MCP servers transform LLM requests into commands that a third-party app or data source can read. MCP servers also redirect agents to appropriate applications (tool discovery), run commands, format app responses in an LLM-understandable way, and manage errors.
- Services are the applications or data sources that MCP servers access. They can be both local files on a user’s device or remote cloud databases, web APIs, or SaaS platforms. An MCP server ensures secure and error-free access to a specific service.
The protocol itself defines how the client and servers communicate, interact with services, and communicate results. It uses structured formats (mainly JSON) to keep outputs clean and consistent.
How MCP differs from traditional APIs
Conceptually, Model Context Protocol and APIs are complementary, not mutually exclusive.
An API is a descriptive standard that contains instructions to call a tool.
MCP is an execution standard that lets AI both call the tool and retrieve its data.
Where REST APIs operate via stateless request/response messages, MCP retains session context. It can query or extract data and add it directly to an LLM’s context window.
Other important differences between MCP and traditional APIs are summarized in this table.

Ultimately, it’s more accurate to consider MCP as an adapter that facilitates the orchestration of all types of APIs.
In fact, there’s a growing number of tools that autogenerate MCP connectors from OpenAPIs.
Where MCP wins over LangChain/LangGraph
In 2023, orchestrators were groundbreaking because they helped create multi-step agentic workflows. These frameworks let LLMs search the web, run code, and access system files to look for answers.
But engineers still had to build ad-hoc integrations for every tool AI agents need access to.
Each integration has a tool-specific implementation: some would run via a Python wrapper, others would require JSON outputs.
MCP solved this problem by creating a uniform way for LangChain, LangGraph, and other orchestrators to plug into third-party tools.
Like with APIs, developers can use MCP as both an alternative and an add-on to orchestrators. It’s unlikely that Model Context Protocol will replace LangChain and LangGraph in multi-agent systems. Orchestrators are still helpful in writing the logic of AI agents, and MCP has no such capabilities.
MCP’s promise to “unify and simplify” tool calling can be as groundbreaking as OpenAPI was back in the early days of the API ecosystem or HTTP was in the infancy of the Internet.
To explore the practical value this technology delivers in the enterprise, let’s take a look at the way global teams deploy MCP-enabled agents at scale.
How enterprises are building AI agents with MCP
Although MCP is still an experimental technology and, as we will discuss later on, a security minefield, enterprises are finding ways to deploy it and create agentic workflows that drive business impact.
Three real-world examples of MCP adoption at large enterprises make it clear that MCP-enabled agents are powerful productivity enhancers.
FinTech: Block’s internal AI agent
Block, a global FinTech company behind Square and Cash App, has built an internal AI agent called Goose that runs on MCP architecture. The agent works as both a desktop application and a command-line tool, giving their engineers access to various MCP servers.
What’s interesting about Block’s approach is that they’ve built all their MCP servers in-house rather than using third-party ones. This gives them complete control over security and lets them customize integrations for their specific workflows.
Angie Jones, VP of Engineering at Block, shared a few popular MCP use cases at Block.
- In engineering, MCP tools help refactor legacy software, migrate databases, run unit tests, and automate repetitive coding tasks.
- Design, product, and customer support teams use MCP-powered Goose to generate documentation, process tickets, and build prototypes.
- Data teams rely on MCP to connect with internal systems and get extra context from internal sources.
Block integrated MCP with the company’s go-to engineering and project management tools: Snowflake, Jira, Slack, Google Drive, and internal task-specific APIs.
Business impact: Thousands of Block’s employees use Goose and cut up to 75% of the time spent on daily engineering tasks.
Media: Bloomberg’s development acceleration
At the MCP Developer Summit, Sabhav Kothari, Head of AI Productivity at Bloomberg, focused on how his team utilizes MCP internally to help AI developers reduce the time required to ship demos into production.
Kothari’s engineering team hypothesized that a system enabling AI agents to interact with the company’s entire infrastructure would facilitate shorter feedback loops and accelerate development. In early 2024, they built an MCP-like protocol internally.
After carefully following MCP adoption, Bloomberg engineers decided to adopt the protocol as an organization-wide standard.

“From day one, we closely followed MCP’s progress because we realized this protocol had the same semantic mapping as our internal approach, but it was being built in the open. We quickly recognized that MCP had that same potential”.
Sabhav Kothari, Head of AI Productivity at Bloomberg
Business impact: MCP adoption helped Bloomberg engineers bridge the product development gap and deploy agents faster. The protocol connects AI researchers to an ever-growing toolset. It reduced time-to-production from days to minutes and created a flywheel where all tools and agents interact and reinforce one another.
E-commerce: Amazon’s API–first advantage
In one of The Pragmatic Engineer’s editions, Gergely Orosz talks about Amazon using MCP at scale as part of its API-first culture. Since the mid-2000s, Amazon has required teams to build internal APIs that other teams can use – what they call their “API-first culture.”
This existing API infrastructure has made Amazon a natural fit for MCP adoption. When you already have thousands of internal APIs, adding MCP as a standardized way to connect AI agents to those APIs makes a lot of sense.
Orosz quotes an Amazon SDE saying that “most internal tools already added MCP support”. Now, Amazon employees can create agents to review tickets, reply to emails, process the internal wiki, and use the command-line interface.
Business impact: According to an Amazon engineer mentioned in the newsletter, the MCP integration with Q CLI is gaining popularity internally, and developers are now automating tedious tasks.
Despite enterprises successfully deploying agentic workflows with MCP, the machine learning community is raising concerns about the protocol’s security and architecture shortcomings.
Challenges of adopting MCP at scale for enterprises
While those early success stories sound promising, many enterprise engineers are still cautious about rolling out MCP more broadly. The technology is relatively new, and there’s always a risk-reward calculation when it comes to adopting emerging technologies at scale.
As a Reddit user points out, taking compliance and security risks for yet unproven productivity benefits is usually not a playbook enterprises play by.
I think a lot of places are exploring MCP and trying to keep up with the tech to ensure their business is competitive. BUT, without a compelling benefit – such as cost savings or generating new business – I fail to see how any company would convert a stable platform to one using MCP at this time.
Reddit user on bottlenecks to MCP adoption
Enterprise organizations are typically unwilling to be the early adopters of emerging technologies. Aside from a few leading-edge adopters like Amazon, most are waiting until the technology either exposes significant vulnerabilities or delivers considerable gains.
Speaking of security, that’s where some of the biggest concerns lie.
Challenge #1: MCP’s authorization is not ‘enterprise-friendly’
Before poking at the vulnerabilities of MCP’s current authorization specification with OAuth, let’s quickly examine the reason Anthropic introduced OAuth specifications in the first place.
Originally, setting up MCP involved a 1:1 deployment of a client and an MCP server on a developer’s local machine. This worked fine for individual developers but didn’t scale to enterprise needs.
Over time, the surge of MCP adoption among smaller projects created a ripple effect in the enterprise. Engineering team leaders were interested in setting up remote MCP servers, but to access data on these servers in privacy-compliant ways, they needed authorization.
Anthropic responded with the first set of authorization specifications, released in March 2025.
First specifications: no separation between authentication and resource servers.
The MCP Authorization spec allowed secure access to servers using OAuth 2.1. Now, engineers could set up the protocol on a remote server, but they had new concerns.

In the specifications, MCP servers were treated as both resource and authorization servers, which went against enterprise best practices, increased fragmentation, and forced developers to expose metadata discovery URLs.
The latest specification: servers are decoupled, but security issues remain.
In June, after months of active discussions on where the first authorization specifications fell short, Anthropic released an updated version that decoupled authorization and resource servers.
Developers were still unhappy. For one, the revised specification leans on OAuth RFCs – a set of frameworks that grant third-party applications limited access to HTTP services, which is not widely used by identity providers.
Anothropic also relies on MCP clients using dynamic client registration that lets anonymous clients register on MCP servers. Not knowing which client is attempting to connect to the server in advance goes against the need for reliability and the strict security that enterprises operate by.
How enterprises solve this problem
To bypass the uncertainty of dynamic client registration, teams build custom tools that test and validate MCP clients.
An open-source example of such a tool is mcp-inspector, a project for testing and debugging MCP servers. When it registers an MCP host, the tool retrieves the metadata, registers the client, and retrieves an OAuth token.
Challenge #2: MCP does not integrate with enterprise SSO systems
Most enterprise environments rely heavily on single sign-on (SSO) systems to control who can access what applications. As Aaron Parecki, one of the co-authors of the OAuth 2.1 spec, explains:
“This enables the company to manage which users are allowed to use which applications and prevents users from needing to have their own passwords at the applications”.
Aaron Palecki, ‘Enterprise-Ready MCP’
The problem is that MCP doesn’t integrate smoothly with these enterprise SSO systems. Parecki argues that MCP-enabled AI agents should be treated like any other enterprise application – controlled through the company’s identity management system.
At the time of writing, connecting an AI agent like Claude to enterprise tools through SSO involves several frustrating steps.
- A user needs to log in to Claude via SSO, access the enterprise IdP, and complete authentication.
- Once authenticated, users need to connect external apps to Claude by clicking a button, get redirected to the IdP, authenticate one more time, get directed back to the app, and accept an OAuth request for access.
- When the user grants appropriate OAuth permissions, they can come back to Claude and use the AI agent.
This authentication by itself is inconvenient for enterprise multi-agent systems that have to connect to a wider range of applications.

More importantly, in this authentication approach, the user is the one granting permissions, with no visibility at the admin level.
This means there’s no one to oversee access control, and there’s a risk of unchecked interaction between mission-critical systems and unvetted third-party applications.
How enterprises solve this problem
Identity solution providers are already developing workarounds to address the limitations of MCP’s authorization.
Okta, one of the leading independent identity vendors, has unveiled Cross-App Access, a protocol that aims to bring visibility and control to MCP-enabled AI agents. It is scheduled for release in Q3, 2

Here is how it adds an extra observability layer to MCP connections.
- Instead of having users manually grant AI agents access to applications and documents, the agent will connect directly to Okta’s internal communication platform.
- The platform determines if the request complies with enterprise policies.
- If the access request is approved, Okta issues a token to the AI agent. The agent presents the token to the communication platform and gets access to the needed tool.
This sign-on gives enterprise admins visibility into access logs and prevents unchecked interactions between teams, AI agents, and internal tools.
Challenge #3: MCP’s default ‘server’ approach does not blend well with serverless architectures
Over 95% of Fortune 500 companies are embedded in the Azure ecosystem that relies on serverless architectures. These infrastructures are poorly suited to MCP implementations, since Anthropic’s protocol is currently deployed as a Docker-packaged server.
Building and managing MCP servers on top of already stable serverless architectures increases maintenance overhead and adds to infrastructure costs in the long run.
MCP developers have released workarounds like streamable HTTP transport via FastMCP with FastAPI to support serverless deployment.
However, engineers who tried deploying serverless MCP in practice say it leaves a lot to be desired.
Ran Isenberg, a Solutions Architect and an opinion leader in serverless architecture, tried setting up an MCP agent in AWS Lambda and hit a few roadblocks on the way.
Cold start delays of up to 5 seconds made the system too slow for any time-sensitive workflows – imagine waiting 5 seconds every time your AI agent needed to access a tool.
Developer experience issues plagued the setup. As Isenberg put it, the process was “confusing, inconsistent, and far from intuitive.” There wasn’t a clear guide for how to set everything up properly.
Infrastructure complexity meant figuring out all the pieces manually, since there was no standard Infrastructure-as-Code template to follow.
Logging problems arose because FastAPI and FastMCP use different logging systems, and they didn’t play well with AWS Lambda’s standard monitoring tools.
Testing difficulties required manual VS Code configuration since there weren’t any streamlined tools for testing MCP server interactions in a serverless environment.
Isenberg’s conclusion about serverless MCP architectures was that they were “doable but far from seamless”.
Before these concerns are addressed in a frictionless, standardized, and reliable way, the proponents of serverless architecture deployed on AWS Lambda, Azure Functions, or Google Cloud Functions will be reluctant to embed MCP into internal systems.
How enterprises are solving this problem
As Nayan Paul, Chief Azure Architect at Accenture, put it in his blog, ‘unless MCP evolves to support serverless deployment options, I’ll likely keep building around it instead of inside it’.
Instead, he recommends battle-tested multi-agent system setups in LangChain and LangGraph built on top of Azure Functions or other serverless environments.
Accenture’s own agentic platform, AI Foundry, is built entirely in Azure Functions and is modular, cost-efficient, and easier to maintain than MCP servers.
Challenge #4: Tool poisoning
In April 2025, Invariant Labs discovered that MCP is vulnerable to tool poisoning, a type of attack where a prompt with malicious instructions is launched at the LLM.

The instructions are not visible to humans but understandable to the AI agent. Thus, a model, now armed with access to internal tools and data, can perform malicious actions, like:
- Extracting and sharing sensitive data like configuration files, databases, or SSH keys.
- Sharing private conversations with third parties
- Manipulate data so that any tool using it starts making wrong predictions.
Later, Invariant Labs followed up on the exploit by sharing a practical example of MCP-enabled tool poisoning. An attacker was able to extract a user’s WhatsApp message history by accessing WhatsApp’s MCP server and altering a seemingly innocent get_fact_of_the_day() tool.
Here are the instructions that the attacker ‘fed’ the LLM.

And here’s how they appear in Cursor: a large amount of white space before the message.

How enterprises are solving this problem
As Sam Willison points out in his blog post on this vulnerability, despite prompt injection being around for over 2 years, machine learning engineers still don’t have a single best way to deal with it.
He encourages engineering teams to follow MCP specifications and make sure there’s a human in the loop between the agent and the tools it uses.
AI agents should also be designed with transparency in mind, which means:
- Have a clear UI that clarifies which tools are exposed to AI
- Provide notifications or other indicators whenever an agent invokes a service
- Ask users for confirmation on mission-critical actions like data manipulation or extraction to adhere to HITL principles.
Invariant Labs, the team that discovered the exploit, also built an MCP security scanner – an open-source project that scans MCP servers and prompts for code vulnerabilities and hidden instructions.
Enterprise organizations should consider foolproofing their MCP architectures with similar off-the-shelf systems or building an in-house alternative.
Challenge #5. Multi-tenancy and scalability gaps
The majority of MCP servers are still single-user machines running locally on a developer’s machine or a single endpoint.
MCP servers supporting multiple agents and concurrent users are fairly recent and have architecture gaps, like authorization gaps, explored in this post.
To support enterprise-grade scale, MCP servers will have to be deployed as a microservice that serves many agents at a time.
That type of architecture creates a new layer of considerations:
- A server should be capable of handling concurrent requests
- It needs to separate data contexts
- There should be a rate limit per client for better resource management
Enterprise-ready MCP servers that meet multi-tenancy requirements are still a weaker part of the ecosystem, although it is maturing rapidly.
How enterprises are solving this problem
Engineering teams are experimenting with MCP Gateways, endpoints that aggregate several MCP servers. This orchestration layer enables multi-tenancy, helps enforce policies like rate limits or access tracking, and orchestrates tool selection by routing the agent to the most relevant server.
Addy Osmani, an engineer currently working on Google Chrome, also expects enterprise teams to build internal tool discovery platforms and registries.
Whenever an AI agent needs to act, it consults this catalog and chooses the best available server.
The bottom line on MCP’s enterprise readiness
Like any new technology, the Model Context Protocol is not perfect. Its ecosystem is still maturing, standardization is lacking, and security exploits are discovered on the fly.
But even these shortcomings do not take away from MCP’s brilliance as a concept and its transformative impact on enterprise operations. If the protocol keeps up its current growth streak, it will likely become the technology that helps AI agents go mainstream.
In 2-3 years, we are looking at enterprise companies where AI agents are full-on “virtual co-workers” and are treated as first-class citizens, with separate workflows, tasks, and KPIs.
Once MCP’s security and large-scale deployments are ironed out, it will be the driver of composable and adaptable workflows that automate nearly 100% of routine tasks, allowing employees to focus on strategic “heavy lifting” that is both more rewarding for the company and fulfilling for teams.
For now, MCP works best for organizations that have the technical expertise to build custom solutions and can accept some risk in exchange for early-mover advantages in AI automation.