How MCP servers work with clients

The following steps describe how a chatbot client such as Claude or ChatGPT uses tools provided by Model Context Protocol (MCP) servers:

  1. Server Registration and Tool Discovery: The chatbot client is configured with the locations of MCP servers. It communicates with these servers to discover the tools they offer and retrieves their detailed definitions (name, description, input parameters).

  2. Providing Tools to the LLM: At the start of a conversation turn, the client sends the LLM the definitions of the available tools, formatted according to the requirements of the specific LLM platform.

  3. LLM Assesses Tool Need: The LLM receives the user's prompt together with the list of available tools and decides whether invoking one of them would be necessary or helpful to generate an appropriate response.

  4. LLM Requests Tool Invocation: If the LLM decides to use a tool, it responds not with final text but with a structured request specifying which tool the client should run and the input arguments the LLM has generated for it.

  5. Client Executes the Tool: The client identifies the MCP server hosting the specified tool and invokes the tool on that server, passing along the arguments.

  6. MCP Server Returns Result: The MCP server runs the tool's function and returns the result of the execution to the chatbot client.

  7. Client Provides Result to LLM: The client sends the result back to the LLM, updating the LLM's context with the outcome of the tool execution.

  8. LLM Finalizes Response: With the tool's result now available, the LLM incorporates it into its final, user-facing response, completing the conversation turn.
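To make steps 1 and 2 concrete, here is a sketch of what a tool definition might look like after discovery, and how a client could reformat it for an LLM platform. The `get_weather` tool is a hypothetical example, and the target function-calling shape shown is one common convention, not the only one; MCP tool definitions carry a name, a description, and a JSON Schema describing the inputs.

```python
import json

# A tool definition as an MCP server might return it during discovery.
# The "get_weather" tool is a made-up example for illustration.
mcp_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

def to_llm_tool(tool: dict) -> dict:
    """Reformat an MCP tool definition into the function-calling shape
    many chat-completion APIs expect (step 2). The exact target format
    depends on the LLM platform."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["inputSchema"],
        },
    }

print(json.dumps(to_llm_tool(mcp_tool), indent=2))
```

The key point is that the client, not the LLM, owns this translation: the same MCP definition can be reformatted for whichever LLM platform the chatbot uses.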
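The loop in steps 3 through 8 can be sketched as follows. The LLM and the MCP server are replaced by stand-in functions so the flow is runnable end to end; all names here (`fake_llm`, `mcp_server_call`, `get_weather`) are illustrative and not part of any real SDK.

```python
def mcp_server_call(tool_name: str, arguments: dict) -> str:
    """Stand-in for an MCP server executing a tool (step 6)."""
    if tool_name == "get_weather":
        return f"18°C and sunny in {arguments['city']}"
    raise ValueError(f"unknown tool: {tool_name}")

def fake_llm(messages: list, tools: list) -> dict:
    """Stand-in for the LLM. On the first pass it returns a structured
    tool-call request (step 4); once a tool result appears in its
    context, it produces the final user-facing answer (step 8)."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if tool_results:
        return {"type": "text",
                "content": f"The weather report: {tool_results[-1]['content']}."}
    return {"type": "tool_call", "tool": "get_weather",
            "arguments": {"city": "Paris"}}

def run_turn(user_prompt: str, tools: list) -> str:
    """Client-side loop: keep calling the LLM, executing any tool it
    requests, until it returns final text (steps 3-8)."""
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_llm(messages, tools)                 # steps 3-4
        if reply["type"] == "tool_call":
            result = mcp_server_call(reply["tool"],
                                     reply["arguments"])  # steps 5-6
            messages.append({"role": "tool", "content": result})  # step 7
        else:
            return reply["content"]                       # step 8

answer = run_turn("What's the weather in Paris?", tools=["get_weather"])
print(answer)  # → The weather report: 18°C and sunny in Paris.
```

Note that the loop allows multiple round trips: a real LLM may request several tool calls in sequence before producing its final answer, and the client simply keeps appending results to the context until it does.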

This lets LLMs access real-time or non-public data and perform specific actions beyond what their training data covers, making chatbots far more useful than the underlying LLM alone.