Introduction

Using tools, LLMs are becoming increasingly capable of interacting with the outside world and accessing data they were not trained on.

These tools function like APIs or plugins that models can call to fetch real-time information, perform actions, or reason beyond their training knowledge.

But what if your own AI agent could become a tool that other LLMs or agents can discover and use dynamically?

In this post, you’ll learn how to expose a LangGraph agent as an MCP tool, making it usable by any Model Context Protocol (MCP) client, such as the LangGraph MCP client, OpenAI’s MCP client, or even your own custom-built one.

This unlocks a powerful pattern for building composable, reusable, cross-agent AI workflows, a pattern we might call "Agent-as-a-Service".

What is MCP?

The Model Context Protocol (MCP) is an open protocol introduced by Anthropic. It standardizes how applications provide context to LLMs. Think of MCP like USB-C for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

Why Expose a LangGraph Agent as an MCP Tool?

Exposing your agent as an MCP tool means any MCP-compatible client can discover and call it, so the same agent can be reused across applications instead of being wired into a single one. This is what makes the composable, cross-agent workflows described above possible.

How to Expose a LangGraph Agent as an MCP Tool

The LangGraph server, an API server designed for creating and managing agent-based applications, implements the MCP protocol using the streamable-http transport.
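The server learns which agents to serve from a `langgraph.json` configuration file in your project. A minimal sketch, assuming your compiled graph is exported as a variable named `graph` in a file called `agent.py` (both names are illustrative):

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  }
}
```

Each key under `graphs` becomes the name of a served agent, and the value points at the module and variable that hold the compiled graph.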

This allows a LangGraph agent to be exposed as an MCP tool, making the agent usable by any MCP client.
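Under the hood, an MCP client talks to the server by POSTing JSON-RPC messages over HTTP. The sketch below builds the `tools/list` request an MCP client would use to discover the exposed agent. It is simplified: a real client first performs the MCP `initialize` handshake, and the `/mcp` endpoint path and local port are assumptions based on a default local LangGraph deployment.

```python
# Simplified sketch of the JSON-RPC traffic behind MCP tool discovery.
# A real MCP client performs an `initialize` handshake before `tools/list`;
# the URL below assumes a local LangGraph server (port is an assumption).
import json
import urllib.request

MCP_ENDPOINT = "http://localhost:2024/mcp"  # hypothetical local deployment


def tools_list_request(request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request asking the server for its tools."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    }


def post_request(payload: dict) -> dict:
    """POST a JSON-RPC payload to the MCP endpoint (requires a running server)."""
    req = urllib.request.Request(
        MCP_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            # Streamable HTTP clients accept both JSON and SSE responses.
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In practice you would use an existing MCP client library rather than raw HTTP, but the wire format above is what any such client sends.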

Requirements

To use the LangGraph MCP server, ensure you have the following dependencies installed:

- `langgraph-api >= 0.2.3`
- `langgraph-sdk >= 0.1.61`