What is the Model Context Protocol (MCP)? A beginner’s guide to MCP servers
Anthropic created the Model Context Protocol (MCP) to function as the connective tissue between AI applications and external tools.
Much like USB-C ports connect computers to cameras and external hard drives, MCP enables Large Language Models (LLMs) to access the tools and instructions they need to execute next-level workflows and become truly agentic.
What is MCP in AI?
MCP in AI is what allows AI agents to access and act on data with minimal human interaction. By connecting AI applications with databases, email accounts, travel itineraries, and more, MCP enables AI agents to request and gather the information needed to complete tasks, rather than relying only on the data sets the AI was trained on.
Why the Model Context Protocol matters for agentic AI
Until recently, AI chatbots like Claude and ChatGPT could only draw on the data they were trained on to field and answer simple requests, needing more and more user input in order to continue a conversation or return information.
This limited field of data has led to unfulfilled requests, or to AI “hallucinating” and giving incorrect or incomplete answers based on gaps in its training data sets.
Now, using the Model Context Protocol and the tools it can integrate, AI agents can become truly agentic, accessing more context as needed in order to perform more complex assignments, all within prescribed boundaries.
Anthropic calls this “progressive disclosure,” where AI agents are given just enough context to take their next step, and then just enough context for the step after that. In this way, agentic AI can begin to tackle more complex workflows, without needing to access more data than necessary — or more processing time and energy than necessary.
What is an MCP server?
Okay, but what is an MCP? What’s an MCP Server? An MCP Client?
The Model Context Protocol has a few different pieces that make up one ongoing conversation:
- MCP Host: This is the AI application that “hosts” one or more MCP clients and interacts with them to make requests.
- MCP Client: This is the protocol-level component that enables a server connection with the AI application.
- MCP Server: This is the program responsible for providing services and context to MCP Clients, and from there to the MCP Host, a.k.a. the AI application.
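The three roles above can be sketched as a toy simulation in plain Python. To be clear, this is not the real MCP SDK: the class and tool names here (ToyServer, lookup_employee, and so on) are illustrative assumptions, and the real protocol exchanges JSON-RPC messages rather than making direct method calls.

```python
class ToyServer:
    """Plays the MCP Server role: owns the tools and answers discovery requests."""
    def __init__(self):
        # A single hypothetical tool; a real server would register many.
        self._tools = {"lookup_employee": lambda name: f"{name}@example.com"}

    def list_tools(self):
        return sorted(self._tools)

    def call_tool(self, name, *args):
        return self._tools[name](*args)


class ToyClient:
    """Plays the MCP Client role: one connection between the host and one server."""
    def __init__(self, server):
        self._server = server

    def request_tools(self):
        return self._server.list_tools()

    def invoke(self, name, *args):
        return self._server.call_tool(name, *args)


class ToyHost:
    """Plays the MCP Host role: the AI application that owns one or more clients."""
    def __init__(self, clients):
        self.clients = clients

    def discover(self):
        # Aggregate available tools across every connected server.
        return {i: c.request_tools() for i, c in enumerate(self.clients)}


host = ToyHost([ToyClient(ToyServer())])
print(host.discover())                                   # {0: ['lookup_employee']}
print(host.clients[0].invoke("lookup_employee", "ada"))  # ada@example.com
```

Note the separation: the host never talks to a server directly; it always goes through a client, which is what lets one host connect to many independent servers.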
Once I’ve connected Claude with an MCP Client, I ask Claude to complete a task. Claude, through the MCP Client, sends a request for information to an MCP Server. It becomes the MCP Server’s job to listen to the request and respond in whatever way it is able: providing the data, prompt templates, or tools available to it back to the MCP Client, and consequently, to Claude to perform my task request.
By connecting an AI application to a server that contains assets, that AI application is able to perform the data and tool discovery it needs in order to complete ongoing tasks. And that’s if an AI application is connected to just one server. Where MCP will truly shine is when multiple servers work alongside each other, providing specialized capabilities that stack on top of one another through AI orchestration.
A Note on MCP Servers: An MCP Server refers to the program that “serves” the context to the MCP Client, regardless of whether the MCP Server is local or remote. There are two standard transports from client to server: stdio for local connections, and HTTP with server-sent events (SSE) for remote connections (newer revisions of the spec replace SSE with Streamable HTTP).
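Whatever the transport, MCP messages themselves are JSON-RPC 2.0 objects; over stdio, each message travels as one JSON object per line. A minimal sketch of building such a line (the helper name make_request is ours, not part of any SDK):

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request as a single newline-terminated line,
    the framing used by MCP's stdio transport."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# "tools/list" is a real MCP method name: it asks a server what tools it offers.
line = make_request(1, "tools/list")
print(line, end="")  # {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```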
MCP protocol messages explained
MCP stands for Model Context Protocol, and that’s what it provides: contexts and protocols for Large Language Models to use and follow.
Say a common request I make when onboarding new employees is: “Send our new hire a welcome email with the employee handbook, a fillable PDF to complete, and a read receipt request.” I do not want to take the time to upload each of these items to Claude every time I make the request; it would be just as fast to send the email myself. But if I have given Claude access to these items through an MCP client-server architecture, Claude can fulfill the task (repeatably!) within seconds.
Here’s how the MCP client-server architecture would work in this scenario:
Requests
An AI application uses an MCP Client to send a “request” to its connected MCP Server. This request asks the server what it has available and if those items are enough to fulfill the task assigned.
Our Example: Claude, through the MCP Client, sends a request to the MCP Server.
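Sketched in the MCP wire format, the discovery request and a follow-up invocation might look like this. The method names tools/list and tools/call come from the MCP spec; the send_email tool and its arguments are hypothetical stand-ins for our onboarding example.

```python
import json

# Discovery: "what do you have available?"
discover = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invocation: "run this tool with these arguments."
invoke = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "send_email",  # hypothetical tool on our server
        "arguments": {"to": "new.hire@example.com", "attach": "handbook.pdf"},
    },
}

for msg in (discover, invoke):
    print(json.dumps(msg))
```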
Responses
The MCP Server then responds with what tools or templates it has available.
The MCP Client is responsible for grabbing the data necessary and managing the LLM tool execution.
Our Example: Our MCP Server has access to all of the following: the employee email database, an email account, and a copy of the necessary fillable PDF. The MCP Server tells the client what it has available.
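A sketch of what the server's reply to tools/list could look like, assuming hypothetical tool names matching our onboarding example (the result-with-tools shape follows the MCP spec; descriptions are illustrative):

```python
# The reply echoes the request's id and carries a "result" payload.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "query_employee_db", "description": "Look up employees"},
            {"name": "send_email", "description": "Send an email with attachments"},
            {"name": "fill_pdf", "description": "Prepare the handbook PDF"},
        ]
    },
}

# The client extracts the tool names so the LLM can decide what to call next.
names = [t["name"] for t in response["result"]["tools"]]
print(names)  # ['query_employee_db', 'send_email', 'fill_pdf']
```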
Errors
MCP Servers can return error messages if they are unable to fulfill a discovery request. Users then need to troubleshoot the protocol to assess what information, tool, or resource is missing.
Our Example: If we didn’t have all of the necessary items loaded onto our MCP Server, the request could not be fulfilled, and the MCP Server would return an error.
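As a sketch, an error reply replaces the result field with an error object, per JSON-RPC 2.0. The code -32601 is the standard “method not found” code; the message text and the helper is_error are illustrative.

```python
# What the server sends back when it cannot fulfill the request.
error_reply = {
    "jsonrpc": "2.0",
    "id": 2,
    "error": {"code": -32601, "message": "Method not found: tools/call"},
}

def is_error(msg):
    """Client-side check: a JSON-RPC reply carries either 'result' or 'error'."""
    return "error" in msg

print(is_error(error_reply))  # True
```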
Notifications
Taking the request-response action to a new level, MCP also supports real-time notifications so that MCP Servers can update MCP Clients when the server’s available tools have changed. They can also provide completion reports, notify clients of resource changes, and more.
This upends the traditional model of AI agent as reactive-only: now the AI system can alert users when a problem occurs, rather than waiting to be fed a report to analyze to see if there is a problem.
Our Example: With notifications, the MCP Server could alert Claude when a new employee is added to the employee directory, which could then initiate this process of sending the email with the handbook and read receipt, saving the user time and ensuring compliance.
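A sketch of such a notification, assuming the MCP method notifications/tools/list_changed. Per JSON-RPC 2.0, notifications carry no id field because no reply is expected; the helper below is our own illustration of how a client might react.

```python
# Pushed from server to client whenever the available tool set changes.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}

def needs_rediscovery(msg):
    """A client seeing this should re-run tools/list to refresh its view."""
    return msg.get("method") == "notifications/tools/list_changed"

print(needs_rediscovery(notification))  # True
```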
MCP vs. APIs vs. Tool/Function Calling vs. Plugins vs. Agent Frameworks
While related, MCP, APIs, Tool/Function Calling, Plugins, and Agent Frameworks all have slightly different roles to play in the agentic landscape:
- Application Programming Interfaces (APIs) are specific sets of rules and protocols that allow two specific software applications to exchange data.
- Tool/Function Calling enables LLMs to reach beyond their internal knowledge and interact with external systems when a query requires it, but each call remains a single request with a single response.
- Plugins are specific application or functionality packages that extend a software’s capabilities, like a browser add-on.
- Agent Frameworks define how AI agents think and act. They can orchestrate workflow, but need an MCP or plugin to enable the action.
- MCP ties all of the above together and makes them accessible to LLMs, and it is capable of integrating multiple layers across multiple client-server relationships.
MCP vs. API Integrations
MCPs and APIs aren’t different versions of the same thing: they’re different layers.
Whereas an API connects two specific applications, MCP is a single open standard through which an AI application can reach any compliant software, along with tools, databases, and more.
MCP vs. Function Calling / Tool Calling
Function/Tool calling is the simplest form of an AI asking for information. It makes a request, gets a predefined response, and that ends the transaction.
MCP creates an ongoing conversation: the AI agent can request, get a response, and make another request — even of a different server — to get more information for task completion.
One is not better than the other: they serve different purposes. Function calling is far more limited, but quickly executes a single request-response task like gathering current weather data.
MCP, by contrast, continues the conversation: an agent can uncover more or different information using additional tools or servers, and receive notifications when new information becomes available, providing truly autonomous capabilities.
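The contrast can be sketched as a toy in plain Python, with hypothetical tool names. Real function calling and real MCP sessions both involve an LLM choosing each step, which is elided here; the point is one-shot versus chained results.

```python
# Two hypothetical tools standing in for external services.
TOOLS = {
    "get_weather": lambda: "sunny",
    "get_forecast": lambda: "rain tomorrow",
}

def function_call(name):
    """Single request, single response: the transaction ends here."""
    return TOOLS[name]()

def agent_loop(plan):
    """MCP-style: chain multiple tool results toward one task.
    Each step could even hit a different server."""
    results = []
    for step in plan:
        results.append(TOOLS[step]())
    return results

print(function_call("get_weather"))                 # sunny
print(agent_loop(["get_weather", "get_forecast"]))  # ['sunny', 'rain tomorrow']
```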
MCP vs. Plugins and Agent Frameworks
Both Plugins and MCPs extend the capabilities of AI, but with different architectures. Plugins are usually a packaged bundle of services that can be installed in a specific platform for a specific use, and their proprietary nature makes them less flexible.
Agent Frameworks are a pre-made architecture that handles orchestration for an AI agent to use predetermined tools and libraries. They take LLMs from the level of chatbot to an agent that can have goals and handle some planning and tool use, as well as collaborate with other agents, but within a set structure.
MCP is far more flexible. Besides being an open protocol rather than a proprietary one, MCPs create connections across AI tools and data sources, not merely within a predetermined software package or stack.
And with MCP, you can move your custom toolset from one AI integration to another, rather than being confined to a specific stack that may not be moving forward as fast as others.
MCP security challenges (And how to mitigate them)
As with all connections to external resources, there are security challenges with MCP.
Here are a few things to consider in order to secure connections between AI models and external tools:
- Enforce strict identity access using OAuth 2.1, which ensures only permitted users can access sensitive data.
- Connect information only where necessary — use sandboxing and isolation to limit MCP server access.
- Use Least-Privilege Authorization to only give minimum necessary permissions to AI agents.
- Create verified tool registries to limit unauthorized integrations.
- Capture and log activity to ensure compliance and audit capabilities.
- Build in Human-in-the-Loop (HITL) checkpoints to require human interaction in high-risk environments.
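Least-privilege authorization and activity logging from the list above can be sketched together. This is an illustrative toy, not a real library; the agent and tool names are hypothetical.

```python
# Each agent gets an explicit allowlist of tools; everything else is denied.
ALLOWED = {
    "onboarding-agent": {"query_employee_db", "send_email"},
}

audit_log = []  # every decision is recorded for compliance and audits

def authorize(agent, tool):
    """Permit the call only if the tool is on the agent's allowlist,
    and log the decision either way."""
    ok = tool in ALLOWED.get(agent, set())
    audit_log.append((agent, tool, "allowed" if ok else "denied"))
    return ok

print(authorize("onboarding-agent", "send_email"))  # True
print(authorize("onboarding-agent", "delete_db"))   # False
print(audit_log[-1])  # ('onboarding-agent', 'delete_db', 'denied')
```

In a real deployment, the allowlist would live in a verified tool registry and the log would feed your audit pipeline, but the shape of the check is the same.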
Using MCP for governed enterprise data
There are advantages to using MCP for governed enterprise data as opposed to one-off AI interactions with data. IT teams can treat MCP as a standardized connection between AI agents and their internal databases, enabling the MCP to become a secure, auditable point of connection.
MCP brings automation to data quality by allowing AI agents to securely and directly interact with enterprise data systems. Ataccama ONE Agentic has the capability to create, test, and deploy data quality rules. This agentic data quality platform can run end-to-end workflows autonomously, catching issues early and sending alerts for necessary Human-in-the-Loop interactions.
Ataccama ONE acts as the single source of truth for governed, business-defined rules. This leads to consistent data governance that scales.
Explore Ataccama MCP
In the end, AI agents are only as useful as the tools they have access to. By integrating your data management platform with purpose-built agentic capabilities, you’ll have the tools you need for end-to-end data observability — now and in the future.
See how Ataccama ONE Agentic can put MCP to work for your business.
FAQs
- What is the model context protocol (MCP)?
The model context protocol (MCP) is an open-source standard that enables connections between AI applications and external systems.
- How does the model context protocol (MCP) work?
The model context protocol (MCP) works by using an AI host, an MCP Client, and an MCP Server to create a client-server architecture that can provide the necessary data and tools for autonomous AI agents to complete tasks.
- What is an MCP server?
An MCP server is a local or remote server that provides prompts, resources, and tools when requested by an MCP Client.
- Is MCP secure, and how do you mitigate the biggest risks?
MCP can be secure with proper risk mitigation. Focusing on strict identity access between MCP Clients and external MCP Servers is one way to mitigate security risks. Another is to have Least-Privilege Authorization and Human-in-the-Loop protocols in place.
David Lazar
David is the Head of Digital Marketing at Ataccama, bringing eight years of experience in the data industry, including his time at Instarea, a data monetization company within the Adastra Group. He holds an MSc. from the University of Glasgow and is passionate about technology and helping businesses unlock the full potential of their data.