Mem0 Open Source Overview
Mem0 Open Source delivers the same adaptive memory engine as the platform, packaged for teams that need to run everything on their own infrastructure. You own the stack, the data, and the customizations. Mem0 v1.0.0 introduced rerankers, async-by-default clients, and Azure OpenAI support; see the release notes for the full rundown before upgrading.
What Mem0 OSS provides
- Full control: Tune every component, from LLMs to vector stores, inside your environment.
- Offline ready: Keep memory on your own network when compliance or privacy demands it.
- Extendable codebase: Fork the repo, add providers, and ship custom automations.
Begin with the Python quickstart (or the Node.js variant) to clone the repo, configure dependencies, and validate memory reads/writes locally.
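To make that loop concrete, here is a minimal sketch using the Python SDK with the default stack described later on this page; the memory text and user_id are placeholder values, and the defaults assume OPENAI_API_KEY is set in your environment:

```python
from mem0 import Memory

# Uses the default stack (OpenAI LLM + embeddings, local Qdrant, SQLite history).
m = Memory()

# Write a memory for a user, then read it back with a semantic search.
m.add("Alice prefers vegetarian recipes", user_id="alice")
results = m.search("What food does Alice like?", user_id="alice")
print(results)
```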
Choose your path
- Python Quickstart: Bootstrap the CLI and verify the add/search loop.
- Node.js Quickstart: Install the TypeScript SDK and run the starter script.
- Configure Components: Set up the LLM, embedder, vector store, and reranker.
- Graph Memory Capability: Relationship-aware recall with Neo4j or Memgraph (see the sketch after this list).
- Tune Retrieval & Rerankers: Hybrid retrieval and reranker controls.
- Deploy with Docker Compose: Reference deployment with REST endpoints.
- Use the REST API: Async add/search flows and automation.
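For the graph memory path, the sketch below shows what enabling relationship-aware recall against Neo4j could look like. The connection URL and credentials are placeholders, and the graph_store block reflects the configuration shape covered in the Graph Memory guide; check that guide for the options your version supports.

```python
from mem0 import Memory

# Placeholder Neo4j connection details; swap in your own instance (or Memgraph).
config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://your-instance.databases.neo4j.io",
            "username": "neo4j",
            "password": "your-password",
        },
    },
}

m = Memory.from_config(config_dict=config)

# Memories added now also populate the graph store for relationship-aware recall.
m.add("Alice works with Bob on the checkout service", user_id="alice")
```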
What you get with Mem0 OSS
| Benefit | Description |
|---|---|
| Full infrastructure control | Host on your own servers with complete access to configuration and deployment. |
| Complete customization | Modify the implementation, extend functionality, and tailor it to your stack. |
| Local development | Perfect for development, testing, and offline environments. |
| No vendor lock-in | Keep ownership of your data, providers, and pipelines. |
| Community driven | Contribute improvements and tap into a growing ecosystem. |
Default components
Mem0 OSS works out of the box with sensible defaults:
- LLM: OpenAI `gpt-4.1-nano-2025-04-14` (via `OPENAI_API_KEY`)
- Embeddings: OpenAI `text-embedding-3-small`
- Vector store: Local Qdrant instance storing data at `/tmp/qdrant`
- History store: SQLite database at `~/.mem0/history.db`
- Reranker: Disabled until you configure a provider
Override any of these defaults by passing a configuration to `Memory.from_config`.
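As an illustration, here is a minimal configuration sketch that overrides the defaults above; the provider names and option keys follow the component configuration guide, and the model names and storage path are placeholders you should adjust to your own environment:

```python
from mem0 import Memory

# Each block follows the provider/config pattern used by Mem0 OSS component configuration.
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4.1-nano-2025-04-14"},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"path": "/tmp/qdrant"},  # local on-disk Qdrant storage
    },
}

m = Memory.from_config(config_dict=config)
```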
Keep going
Need a managed alternative? Compare hosting models in the Platform vs OSS guide or switch tabs to the Platform documentation.