Human-in-the-loop - Coming soon
Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
This page covers how to use human-in-the-loop in LangChain.
Overview
Interrupt Config
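This section is still being written. In the meantime, here is a minimal sketch of the underlying mechanism, assuming the LangGraph `interrupt` primitive that LangChain agents build on: a node calls `interrupt()` with a payload describing what it needs from a human, and the run pauses until a resume value arrives. The state schema, node name, and payload shape below are illustrative, not the final API documented on this page.

```python
from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt


class State(TypedDict):
    proposed_action: str
    approved: bool


def approval_node(state: State) -> dict:
    # interrupt() pauses the run here and surfaces the payload to the caller.
    # The value supplied when the run is resumed becomes its return value.
    decision = interrupt(
        {"question": "Approve this action?", "proposed_action": state["proposed_action"]}
    )
    return {"approved": decision == "yes"}


builder = StateGraph(State)
builder.add_node("approval", approval_node)
builder.add_edge(START, "approval")
builder.add_edge("approval", END)

# A checkpointer is required: it persists state so the paused run can resume later.
graph = builder.compile(checkpointer=MemorySaver())
```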
Resuming
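Resuming goes through the checkpointer: invoking the graph again on the same `thread_id` with a `Command(resume=...)` hands the human's answer back to the paused `interrupt()` call. The snippet below continues the illustrative sketch above (the `graph` object and state keys are the same assumed ones).

```python
from langgraph.types import Command

# `graph` is the compiled graph from the sketch above; the thread_id ties
# both calls to the same checkpointed run.
config = {"configurable": {"thread_id": "thread-1"}}

# The first call runs until interrupt() and pauses, persisting state.
graph.invoke({"proposed_action": "delete 10 rows", "approved": False}, config)

# Resuming on the same thread injects the human's answer as the return value
# of interrupt(), and execution continues from the paused node.
final = graph.invoke(Command(resume="yes"), config)
print(final["approved"])  # True
```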
Common Patterns
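A typical pattern is letting a human approve, edit, or reject a tool call before it runs. The helper below is a hypothetical illustration of that shape, meant to be called from inside a graph node or tool so that `interrupt()` has a run to pause; the response keys are assumptions, not a documented schema.

```python
from langgraph.types import interrupt


def review_tool_call(tool_name: str, tool_args: dict) -> dict:
    """Ask a human to approve, edit, or reject a proposed tool call."""
    response = interrupt(
        {"tool": tool_name, "args": tool_args, "options": ["approve", "edit", "reject"]}
    )
    if response["action"] == "approve":
        return tool_args
    if response["action"] == "edit":
        # The reviewer supplied corrected arguments.
        return response["args"]
    raise ValueError(f"Tool call {tool_name!r} rejected by reviewer")
```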