The core agent loop consists of calling the model, letting it choose tools to execute, and then finishing when it calls no more tools.
- `Middleware.beforeModel`: runs before model execution. Can update state or jump to a different node (`model`, `tools`, `__end__`).
- `Middleware.modifyModelRequest`: runs before model execution, to prepare the model request object. Can only modify the current model request object (no permanent state updates) and cannot jump to a different node.
- `Middleware.afterModel`: runs after model execution, before tools are executed. Can update state or jump to a different node (`model`, `tools`, `__end__`).
The hooks run in this order: `beforeModel`, `modifyModelRequest`, `afterModel`.
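The three hooks can be sketched as a plain TypeScript interface. This is an illustrative simplification, not LangChain's actual types; the names `AgentState`, `ModelRequest`, and `JumpTarget` here are stand-ins:

```typescript
// Illustrative sketch of the three middleware hooks; all types are
// simplified stand-ins, not LangChain's actual exports.
type JumpTarget = "model" | "tools" | "__end__";

interface AgentState {
  messages: string[];
  jumpTo?: JumpTarget;
}

interface ModelRequest {
  systemPrompt?: string;
  messages: string[];
}

interface Middleware {
  // May return a state update, optionally carrying a jump target.
  beforeModel?(state: AgentState): Partial<AgentState> | void;
  // Stateless: may only return a new request object, no state updates.
  modifyModelRequest?(request: ModelRequest, state: AgentState): ModelRequest;
  // Runs after the model responds; may also update state or jump.
  afterModel?(state: AgentState): Partial<AgentState> | void;
}

// Example: a middleware that only rewrites the system prompt.
const promptMiddleware: Middleware = {
  modifyModelRequest: (request) => ({ ...request, systemPrompt: "Be terse." }),
};
```

Note that only `beforeModel` and `afterModel` can influence control flow; `modifyModelRequest` can only shape the outgoing request.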
Using in an agent
When using middleware in an agent, the following constraints apply:

- `model` must be either a string or a `BaseChatModel`. Will error if a function is passed. If you want to dynamically control the model, use `AgentMiddleware.modifyModelRequest`.
- `prompt` must be either a string or `None`. Will error if a function is passed. If you want to dynamically control the prompt, use `AgentMiddleware.modifyModelRequest`.
- `preModelHook` must not be provided. Use `AgentMiddleware.beforeModel` instead.
- `postModelHook` must not be provided. Use `AgentMiddleware.afterModel` instead.
Built-in middleware
LangChain provides several built-in middleware to use off the shelf.

Summarization

The `summarizationMiddleware` automatically manages conversation history by summarizing older messages when token limits are approached. It monitors the total token count of messages and creates concise summaries to preserve context while staying within model limits.
Key features:
- Automatic token counting and threshold monitoring
- Intelligent message partitioning that preserves AI/Tool message pairs
- Customizable summary prompts and token limits

Use cases:

- Long-running conversations that exceed token limits
- Multi-turn dialogues with extensive context
Configuration options:

- `model`: Language model to use for generating summaries (required)
- `maxTokensBeforeSummary`: Token threshold that triggers summarization
- `messagesToKeep`: Number of recent messages to preserve (default: 20)
- `tokenCounter`: Custom function for counting tokens (defaults to character-based approximation)
- `summaryPrompt`: Custom prompt template for summary generation
- `summaryPrefix`: Prefix added to system messages containing summaries (default: `"## Previous conversation summary:"`)
The middleware partitions messages carefully, by:

- Never splitting AI messages from their corresponding tool responses
- Preserving the most recent messages for continuity
- Including previous summaries in new summarization cycles
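The partitioning rule can be sketched with a small helper. This is a simplified stand-in for what the middleware does internally, assuming a roughly four-characters-per-token approximation; the `Msg` type and function names are illustrative, not the library's:

```typescript
// Simplified sketch of the summarization partitioning rule: split history
// into (toSummarize, toKeep) without separating an AI message from the
// tool messages that respond to it.
type Role = "system" | "user" | "ai" | "tool";
interface Msg { role: Role; content: string }

// The documented default token counter is character-based; here we
// assume roughly 4 characters per token.
const approxTokens = (msgs: Msg[]): number =>
  msgs.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);

function partitionMessages(messages: Msg[], messagesToKeep: number): [Msg[], Msg[]] {
  let cut = Math.max(0, messages.length - messagesToKeep);
  // Move the cut earlier so the kept window never starts with a tool
  // result whose originating AI message would be summarized away.
  while (cut > 0 && messages[cut].role === "tool") cut--;
  return [messages.slice(0, cut), messages.slice(cut)];
}
```

A summarization pass would run only when `approxTokens` (or a custom `tokenCounter`) exceeds the configured threshold, summarize the first partition, and keep the second verbatim.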
Human-in-the-loop
The `humanInTheLoopMiddleware` enables human oversight and intervention for tool calls made by AI agents. It intercepts tool executions and allows human operators to approve, modify, reject, or manually respond to tool calls before they execute.
Key features:
- Selective tool approval based on configuration
- Multiple response types (accept, edit, ignore, response)
- Asynchronous approval workflow using LangGraph interrupts
- Custom approval messages with contextual information

Use cases:

- High-stakes operations requiring human approval (database writes, file system changes)
- Quality control and safety checks for AI actions
- Compliance scenarios requiring audit trails
- Development and testing of agent behaviors
Response types:

- `accept`: Execute the tool with the original arguments
- `edit`: Modify arguments before execution, e.g. `{ type: "edit", args: { action: "tool_name", args: { modified: "args" } } }`
- `ignore`: Skip tool execution and terminate the agent
- `response`: Provide a manual response instead of executing the tool, e.g. `{ type: "response", args: "Manual response text" }`
Configuration options:

- `toolConfigs`: Map of tool names to their approval settings:
  - `requireApproval`: Whether the tool needs human approval
  - `description`: Custom message shown during approval request
- `messagePrefix`: Default prefix for approval messages
Anthropic prompt caching
`AnthropicPromptCachingMiddleware` lets you enable Anthropic's native prompt caching.
Prompt caching optimizes API usage by allowing Anthropic to resume from cached prefixes of your prompts.
This is particularly useful for tasks with repetitive prompts or prompts containing redundant information.
Learn more about Anthropic Prompt Caching (strategies, limitations, etc.) here.
Custom Middleware
Agent middleware are subclasses of `AgentMiddleware` that implement one or more of its hooks. `AgentMiddleware` currently provides three different ways to modify the core agent loop:
- `before_model`: runs before the model is run. Can update state or exit early with a jump.
- `modify_model_request`: runs before the model is run. Cannot update state or exit early with a jump.
- `after_model`: runs after the model is run. Can update state or exit early with a jump.
To exit early, add a `jump_to` key to the state update with one of the following values:

- `"model"`: Jump to the model node
- `"tools"`: Jump to the tools node
- `"__end__"`: Jump to the end node
before_model
Runs before the model is run. Can modify state by returning a new state object or state update.
Signature:
modify_model_request
Runs before the model is run, but after all `before_model` calls.
These functions cannot modify permanent state or exit early.
Rather, they are intended to modify calls to the model in a **stateless** way.
If you want to modify calls to the model in a **stateful** way, use `before_model` instead.
Modifies the model request. The model request has several key properties:

- `model` (`BaseChatModel`): the model to use. Note: this needs to be the base chat model instance, not a string.
- `system_prompt` (`str`): the system prompt to use. Will get prepended to `messages`.
- `messages` (list of messages): the message list. Should not include the system prompt.
- `tool_choice` (`Any`): the tool choice to use.
- `tools` (list of `BaseTool`): the tools to use for this model call.
- `responseFormat` (`ResponseFormat`): the response format to use for structured output.
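The request shape above can be sketched as a TypeScript interface, using camelCase equivalents of the listed fields; the concrete types are simplified stand-ins, and the `withSystemPrompt` helper is hypothetical, shown only to illustrate the stateless style of modification:

```typescript
// Illustrative shape of the model request; field names follow the docs,
// concrete types are simplified stand-ins, not LangChain's real classes.
interface BaseChatModel { name: string }
interface BaseTool { name: string }

interface ModelRequest {
  model: BaseChatModel;   // must be an actual model instance, not a string
  systemPrompt?: string;  // prepended to messages when calling the model
  messages: string[];     // history, without the system prompt
  toolChoice?: unknown;
  tools: BaseTool[];
  responseFormat?: unknown;
}

// A modify_model_request-style hook is stateless: it returns a new
// request object rather than mutating agent state.
function withSystemPrompt(request: ModelRequest, prompt: string): ModelRequest {
  return { ...request, systemPrompt: prompt };
}
```

Returning a fresh object keeps the modification scoped to this single model call, matching the "no permanent state updates" rule for this hook.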
after_model
Runs after the model is run. Can modify state by returning a new state object or state update.
Signature:
New state keys
Middleware can extend the agent's state with custom properties, enabling rich data flow between middleware components and ensuring type safety throughout the agent execution.

State extension

Middleware can define additional state properties that persist throughout the agent's execution. These properties become part of the agent's state and are available to all hooks for that middleware. When a middleware defines required state properties through its `stateSchema`, these properties must be provided when invoking the agent:
Context extension
This is currently only available in JavaScript.
Combining multiple middleware
When using multiple middleware, their state and context schemas are merged. All required properties from all middleware must be satisfied:

Agent-level context schema

Agents can also define their own context requirements that combine with middleware requirements:

Best practices
- Use State for Dynamic Data: Properties that change during execution (user session, accumulated data)
- Use Context for Configuration: Static configuration values (API keys, feature flags, limits)
- Provide Defaults When Possible: Use `.default()` in Zod schemas to make properties optional
- Document Requirements: Clearly document what state and context properties your middleware requires
- Type Safety: Leverage TypeScript's type checking to catch missing properties at compile time
Middleware execution order
You can provide multiple middleware. They are executed as follows:

- `beforeModel`: run in the order they are passed in. If an earlier middleware exits early, subsequent middleware are not run.
- `modifyModelRequest`: run in the order they are passed in.
- `afterModel`: run in the reverse order they are passed in. If an earlier middleware exits early, subsequent middleware are not run.
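The ordering rules can be sketched with a toy scheduler. This is illustrative only, assuming hooks that signal an early exit by returning `false`; it is not the library's actual scheduler:

```typescript
// Toy scheduler demonstrating the documented ordering: beforeModel hooks
// run first-to-last, afterModel hooks run last-to-first, and an early
// exit stops the remaining beforeModel hooks.
interface Hooks {
  name: string;
  beforeModel?: () => boolean; // return false to exit early
  afterModel?: () => void;
}

function runOrder(middleware: Hooks[], log: string[]): void {
  for (const m of middleware) {
    if (m.beforeModel) {
      log.push(`before:${m.name}`);
      if (m.beforeModel() === false) return; // later middleware skipped
    }
  }
  log.push("model");
  for (const m of [...middleware].reverse()) {
    if (m.afterModel) {
      log.push(`after:${m.name}`);
      m.afterModel();
    }
  }
}
```

Running two middleware `a` and `b` through this scheduler logs `before:a, before:b, model, after:b, after:a`, which is the nesting described above.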
Agent jumps
In order to exit early, you can add a `jump_to` key to the state update with one of the following values:

- `"model"`: Jump to the model node
- `"tools"`: Jump to the tools node
- `"__end__"`: Jump to the end node
When jumping to the `model` node, all `beforeModel` middleware will run. It is forbidden to jump to `model` from within a `beforeModel` middleware.
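That restriction can be expressed as a small guard function (a hypothetical helper for illustration, not library code):

```typescript
// Guard for jump targets: jumping to "model" from a beforeModel hook is
// rejected, per the rule above. Names are illustrative.
type Jump = "model" | "tools" | "__end__";
type HookKind = "beforeModel" | "afterModel";

function resolveJump(jump: Jump, fromHook: HookKind): Jump {
  if (jump === "model" && fromHook === "beforeModel") {
    throw new Error('jumping to "model" from a beforeModel middleware is not allowed');
  }
  return jump;
}
```

An `afterModel` hook may jump anywhere, including back to `model`, which re-runs every `beforeModel` hook.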
Example usage: