In this post, I will walk through the MCP (Model Context Protocol) server and how it can be used.
Before diving into MCP, I will start with the AI agent and its components.
What is an AI agent?
An AI agent is a software program powered by artificial intelligence that can perceive its environment, make decisions, and take actions to achieve specific goals, often autonomously and interactively.
Key Characteristics of AI Agents:
- Autonomous: Operates without constant human input.
- Goal-Directed: Works toward a defined objective or outcome.
- Context-Aware: Can understand and adapt to its environment or situation.
- Interactive: Can respond to user input or interact with other systems/tools.
- Iterative Reasoning: Often uses planning, memory, and feedback to improve decisions over time.
A simple architecture of the Agent is illustrated below:
The components of an Agent
As we can see in the diagram above, there are 4 main components:
- LLM – A generative AI model that generates text based on the given inputs.
- Tools – The agent's callable, actionable entities, i.e. an API or a function.
- Knowledge – The knowledge base the agent needs to answer the user's query, e.g. product or service info, service tickets, or API documentation.
- Memory – Some form of storage where conversations are kept to provide context to the LLM, since the LLM itself is stateless.
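Because the LLM is stateless, the memory component can be as simple as a list of conversation turns replayed on every call. A minimal sketch (the class and method names here are illustrative, not from any specific framework):

```python
class ConversationMemory:
    """Minimal in-memory store for chat history (illustrative only)."""

    def __init__(self):
        self.turns = []

    def add(self, role, content):
        # Each turn is stored as a role/content pair, mirroring the
        # message format most chat LLM APIs expect.
        self.turns.append({"role": role, "content": content})

    def as_context(self):
        # The full history is replayed to the stateless LLM on every call.
        return list(self.turns)


memory = ConversationMemory()
memory.add("user", "List all schemas in AdventureWorks.")
memory.add("assistant", "Here are the schemas...")
context = memory.as_context()  # both turns go back to the LLM as context
```

A real agent framework would also truncate or summarise old turns to stay within the model's context window.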
What is a tool and why does an AI agent need it?
A tool is a piece of code or function (written in any programming language) or an API endpoint that performs some business logic. For instance, sending an email, updating an order, or cancelling an order and issuing a refund.
The agent performs action(s) by calling the tools exposed to it. These tools can exist within the agent itself or be hosted elsewhere, as in the case of an MCP (Model Context Protocol) server.
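As a concrete (hypothetical) example, a refund tool could be a plain Python function; agent frameworks typically use its name, type hints, and docstring to describe the tool to the LLM. The function below is an illustrative stand-in, not real business logic:

```python
def cancel_order(order_id: int, reason: str = "customer request") -> dict:
    """Cancel an order and issue a refund.

    Illustrative stand-in: a real implementation would call an
    order service or update a database before refunding.
    """
    # Pretend we looked the order up, cancelled it, and refunded it.
    return {
        "order_id": order_id,
        "status": "cancelled",
        "refund_issued": True,
        "reason": reason,
    }


result = cancel_order(1234)
print(result["status"])  # cancelled
```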
What is MCP – Model Context Protocol?
"The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardised way to connect LLMs with the context they need."
From Model Context Protocol · GitHub
With MCP, I can have only ONE agent interface and simply add MCP Servers to allow that agent to talk to external sources. It is like a pluggable device: you just plug in your new MCP Server(s) and start chatting with, or getting insights from, the source(s).
MCP uses a client-server architecture, as shown in the diagram below:
The MCP client can be anything: your agent, VS Code, Claude, or another desktop application.
Building an MCP Server (for connecting to the AdventureWorks DB)
Let's build a simple MCP server which pulls data from the AdventureWorks database.
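The original post shows the server code as a screenshot. A minimal sketch of what such a server might look like, using the MCP Python SDK's FastMCP helper and pyodbc, is below; the connection string variable, tool name, and query are my assumptions for illustration, not the post's exact code:

```python
# server.py - minimal sketch (assumes `pip install mcp pyodbc`);
# the tool and connection details are illustrative.
import os

import pyodbc
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-adventure-works", port=8091)


def get_connection():
    # Connection string is read from the environment (.env file).
    return pyodbc.connect(os.environ["ADVENTUREWORKS_CONN_STR"])


@mcp.tool()
def list_schemas() -> list[str]:
    """List all schemas in the AdventureWorks database."""
    with get_connection() as conn:
        rows = conn.cursor().execute("SELECT name FROM sys.schemas").fetchall()
        return [row[0] for row in rows]


if __name__ == "__main__":
    # SSE transport so VS Code / Chainlit can connect over HTTP.
    mcp.run(transport="sse")
```

Running this requires a reachable SQL Server instance with the AdventureWorks sample database restored.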
Make sure the transport is set to "sse". You also need to set the following in your .env file:
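The post shows the .env contents as an image. A plausible shape for such a file (the variable name and ODBC driver version are assumptions) would be:

```ini
# .env - illustrative; adjust driver, server, and database to your setup
ADVENTUREWORKS_CONN_STR=Driver={ODBC Driver 18 for SQL Server};Server=localhost;Database=AdventureWorks;Trusted_Connection=yes;TrustServerCertificate=yes;
```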
Let’s start the server by running the following command:
$ python server.py
And note down the address as shown below:
Connecting to the MCP Server through VS Code
Let's test the MCP server. We have multiple options: 1) MCP Inspector, 2) connecting from VS Code.
I will be using VS Code and GitHub Copilot to test the MCP Server.
Follow the steps below:
Open VS Code and create a folder called ".vscode".
Create a file called mcp.json and add the JSON shown below:
{
  "servers": {
    "mcp-adventure-works": {
      "type": "sse",
      "url": "http://localhost:8091/sse"
    }
  }
}
Then click on "Start" and you will see the discovered tools.
Follow the steps below to configure the MCP tools:
1. Open GitHub Copilot as shown below:
2. Open Copilot in Agent mode as shown below:
3. Configure the tools by clicking on the tools button shown below:
4. Select MCP-Adventureworks:
5. Ask Copilot a question about AdventureWorks and click on "Continue".
Connecting to the MCP Server through a Custom AI Assistant App (Chainlit)
You can develop an AI Assistant with Chainlit, as described in cookbook/mcp at main · Chainlit/cookbook · GitHub.
Your Chainlit app should have an @cl.on_mcp_connect decorator, as shown below:
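The decorator screenshot is not reproduced here. A minimal sketch based on the Chainlit MCP cookbook, with an illustrative handler body that records each connected server's tools, looks like this:

```python
import chainlit as cl
from mcp import ClientSession


@cl.on_mcp_connect
async def on_mcp_connect(connection, session: ClientSession):
    """Called when a user connects an MCP server from the Chainlit UI."""
    # Discover the tools the MCP server exposes.
    result = await session.list_tools()
    tools = [
        {
            "name": tool.name,
            "description": tool.description,
            "input_schema": tool.inputSchema,
        }
        for tool in result.tools
    ]
    # Store them per connection so the agent can route tool calls later.
    mcp_tools = cl.user_session.get("mcp_tools", {})
    mcp_tools[connection.name] = tools
    cl.user_session.set("mcp_tools", mcp_tools)
```

Your agent code can then read "mcp_tools" from the user session and pass the tool schemas to the LLM on each turn.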
Once your app is running, you should see a plug icon which allows you to configure MCP Servers, as shown below:
Click on it, add an MCP Server, give it a name, and provide the MCP Server address (e.g. http://localhost:8091), as shown below:
Once it is added and the connection succeeds, you should see all the available tools, as shown below:
Asking "list all schemas" gives the following response:
NOTE: The data from MCP Servers is sent to the LLM, so make sure your LLM runs within your private network, or use an LLM that does not use your prompts for re-training/fine-tuning, such as Azure OpenAI.
Happy exploring MCP.