Spin up Mem0 with the Node SDK in just a few steps. You’ll install the package, initialize the client, add a memory, and confirm retrieval with a single search.

Prerequisites

  • Node.js 18 or higher
  • An OpenAI API key stored in your environment (the default LLM and embedder use OpenAI); add keys for other providers only if you swap them in

Install and run your first memory

1. Install the SDK

npm install mem0ai

2. Initialize the client

import { Memory } from "mem0ai/oss";

const memory = new Memory();

3. Add a memory

const messages = [
  { role: "user", content: "I'm planning to watch a movie tonight. Any recommendations?" },
  { role: "assistant", content: "How about thriller movies? They can be quite engaging." },
  { role: "user", content: "I'm not a big fan of thriller movies but I love sci-fi movies." },
  { role: "assistant", content: "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future." }
];

await memory.add(messages, { userId: "alice", metadata: { category: "movie_recommendations" } });
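add() resolves once the conversation has been distilled into memories. To inspect what was extracted, capture the return value; this sketch assumes the resolved object carries a results array shaped like the search output below:

// Same call as above, but keeping the result for inspection.
const addResult = await memory.add(messages, {
  userId: "alice",
  metadata: { category: "movie_recommendations" }
});
console.log(addResult.results); // one entry per memory created or updated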

4. Search memories

const results = await memory.search("What do you know about me?", { userId: "alice" });
console.log(results);
Output
{
  "results": [
    {
      "id": "892db2ae-06d9-49e5-8b3e-585ef9b85b8e",
      "memory": "User is planning to watch a movie tonight.",
      "score": 0.38920719231944799,
      "metadata": {
        "category": "movie_recommendations"
      },
      "userId": "alice"
    }
  ]
}
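Each hit carries a relevance score, where higher means a closer match to the query. A small sketch for keeping only stronger matches (the 0.3 cutoff is an arbitrary illustration, not an SDK default):

// Filter out weak matches before handing memories to your prompt.
const relevant = results.results.filter((r) => (r.score ?? 0) >= 0.3);
for (const r of relevant) {
  console.log(`${r.memory} (score: ${(r.score ?? 0).toFixed(2)})`);
}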
By default the Node SDK uses local-friendly settings (OpenAI gpt-4.1-nano-2025-04-14, text-embedding-3-small, in-memory vector store, and SQLite history). Swap components by passing a config as shown below.

Configure for production

import { Memory } from "mem0ai/oss";

const memory = new Memory({
  version: "v1.1",
  embedder: {
    provider: "openai",
    config: {
      apiKey: process.env.OPENAI_API_KEY || "",
      model: "text-embedding-3-small"
    }
  },
  vectorStore: {
    provider: "memory",
    config: {
      collectionName: "memories",
      dimension: 1536
    }
  },
  llm: {
    provider: "openai",
    config: {
      apiKey: process.env.OPENAI_API_KEY || "",
      model: "gpt-4-turbo-preview"
    }
  },
  historyDbPath: "memory.db"
});
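Both the LLM and the embedder above read OPENAI_API_KEY, so it is worth failing fast when the variable is missing. A minimal guard of our own (not part of the SDK):

// Abort at startup instead of letting the first add/search call fail.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY must be set for the OpenAI LLM and embedder.");
}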

Manage memories (optional)

const allMemories = await memory.getAll({ userId: "alice" });
console.log(allMemories);

// Audit history
const history = await memory.history("892db2ae-06d9-49e5-8b3e-585ef9b85b8e");
console.log(history);

// Delete specific or scoped memories
await memory.delete("892db2ae-06d9-49e5-8b3e-585ef9b85b8e");
await memory.deleteAll({ userId: "alice" });

// Reset everything
await memory.reset();
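The client can also rewrite a stored memory in place. A minimal sketch, assuming update(memoryId, data) takes the two-argument form used by the Python SDK:

// Replace the text of an existing memory (id comes from getAll or search).
await memory.update(
  "892db2ae-06d9-49e5-8b3e-585ef9b85b8e",
  "User loves sci-fi movies and dislikes thrillers."
);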

Use a custom history store

The Node SDK supports Supabase (or other providers) when you need serverless-friendly history storage.

import { Memory } from "mem0ai/oss";

const memory = new Memory({
  historyStore: {
    provider: "supabase",
    config: {
      supabaseUrl: process.env.SUPABASE_URL || "",
      supabaseKey: process.env.SUPABASE_KEY || "",
      tableName: "memory_history"
    }
  }
});
Create the Supabase table with:
create table memory_history (
  id text primary key,
  memory_id text not null,
  previous_value text,
  new_value text,
  action text not null,
  created_at timestamp with time zone default timezone('utc', now()),
  updated_at timestamp with time zone,
  is_deleted integer default 0
);
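To confirm that history writes land in Supabase, add a memory and read back its audit trail. This assumes add() returns the created memories under results, as in the search output above:

// Write through the Supabase-backed history store, then read it back.
const res = await memory.add(
  [{ role: "user", content: "I prefer window seats." }],
  { userId: "alice" }
);
const history = await memory.history(res.results[0].id);
console.log(history); // rows should mirror entries in memory_history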

Configuration parameters

Mem0 offers granular configuration across vector stores, LLMs, embedders, and history stores.
Vector store

Parameter | Description | Default
--------- | ----------- | -------
provider | Vector store provider (e.g., "memory") | "memory"
host | Host address | "localhost"
port | Port number | undefined

LLM

Parameter | Description | Provider
--------- | ----------- | --------
provider | LLM provider (e.g., "openai", "anthropic") | All
model | Model to use | All
temperature | Temperature value | All
apiKey | API key | All
maxTokens | Max tokens to generate | All
topP | Probability threshold | All
topK | Token count to keep | All
openaiBaseUrl | Base URL override | OpenAI

Graph store

Parameter | Description | Default
--------- | ----------- | -------
provider | Graph store provider (e.g., "neo4j") | "neo4j"
url | Connection URL | process.env.NEO4J_URL
username | Username | process.env.NEO4J_USERNAME
password | Password | process.env.NEO4J_PASSWORD

Embedder

Parameter | Description | Default
--------- | ----------- | -------
provider | Embedding provider | "openai"
model | Embedding model | "text-embedding-3-small"
apiKey | API key | undefined

General

Parameter | Description | Default
--------- | ----------- | -------
historyDbPath | Path to history database | "{mem0_dir}/history.db"
version | API version | "v1.0"
customPrompt | Custom processing prompt | undefined

History store

Parameter | Description | Default
--------- | ----------- | -------
provider | History provider | "sqlite"
config | Provider configuration | undefined
disableHistory | Disable history store | false

A complete configuration combining these options:
const config = {
  version: "v1.1",
  embedder: {
    provider: "openai",
    config: {
      apiKey: process.env.OPENAI_API_KEY || "",
      model: "text-embedding-3-small"
    }
  },
  vectorStore: {
    provider: "memory",
    config: {
      collectionName: "memories",
      dimension: 1536
    }
  },
  llm: {
    provider: "openai",
    config: {
      apiKey: process.env.OPENAI_API_KEY || "",
      model: "gpt-4-turbo-preview"
    }
  },
  historyStore: {
    provider: "supabase",
    config: {
      supabaseUrl: process.env.SUPABASE_URL || "",
      supabaseKey: process.env.SUPABASE_KEY || "",
      tableName: "memory_history"
    }
  },
  disableHistory: false,
  customPrompt: "I'm a virtual assistant. I'm here to help you with your queries."
};
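Pass the assembled object straight to the constructor, then use the client exactly as in the quickstart above:

import { Memory } from "mem0ai/oss";

const memory = new Memory(config);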

What’s next?

If you have any questions, please feel free to reach out.