Model caches
Caching LLM calls can be useful for testing, cost savings, and speed.
Below are integrations that let you cache the results of individual LLM calls, each with a different backend and caching strategy.
Azure Cosmos DB NoSQL Semantic Cache
View guide
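
Every cache integration plugs into the same global hook, `set_llm_cache`. Here is a minimal sketch of that shared pattern using the built-in in-memory cache; it assumes the `langchain-openai` package is installed and `OPENAI_API_KEY` is set, and the model name is illustrative.

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

# Register a process-wide cache. Any cache integration (in-memory,
# Azure Cosmos DB, ...) plugs into this same hook.
set_llm_cache(InMemoryCache())

llm = ChatOpenAI(model="gpt-4o-mini")  # model name is illustrative

# The first call goes to the provider; the second identical call is
# answered from the cache, saving time and tokens.
print(llm.invoke("Tell me a joke").content)
print(llm.invoke("Tell me a joke").content)
```

Swapping in a different backend, such as the Azure Cosmos DB semantic cache above, only changes the object passed to `set_llm_cache`; the rest of the calling code stays the same.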