Model caches
Caching LLM calls can be useful for testing, cost savings, and speed. Below are integrations that let you cache the results of individual LLM calls, each using a different backend and caching strategy; a generic usage sketch follows the list.
Azure Cosmos DB NoSQL Semantic Cache: View guide
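Whatever the backend, every cache integration is passed to the model through the same cache constructor option. The following is a minimal sketch of that pattern, assuming the @langchain/openai and @langchain/core packages are installed and an OpenAI API key is set in the environment; the in-memory cache here stands in for a hosted integration like the one above.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { InMemoryCache } from "@langchain/core/caches";

// Cache integrations implement a common cache interface, so an
// integration-specific cache is wired up the same way as this one.
const cache = new InMemoryCache();

const model = new ChatOpenAI({
  model: "gpt-4o-mini", // model name assumed for illustration
  cache,
});

// The first call hits the provider API and writes the response to the cache.
await model.invoke("Tell me a joke");

// An identical second call is served from the cache: no API cost, lower latency.
await model.invoke("Tell me a joke");
```

Note that an exact-match cache like this one only helps when the prompt and model parameters repeat verbatim; a semantic cache, such as the Azure Cosmos DB NoSQL integration linked above, instead matches prompts by embedding similarity, trading exactness for a higher hit rate.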