Custom fact extraction prompts let you decide exactly which facts Mem0 records from a conversation. Define a focused prompt, give a few examples, and Mem0 will add only the memories that match your use case.
You’ll use this when…
  • A project needs domain-specific facts (order numbers, customer info) without storing casual chatter.
  • You already have a clear schema for memories and want the LLM to follow it.
  • You must prevent irrelevant details from entering long-term storage.
Prompts that are too broad cause unrelated facts to slip through. Keep instructions tight and test them with real transcripts.

Feature anatomy

  • Prompt instructions: Describe which entities or phrases to keep. Specific guidance keeps the extractor focused.
  • Few-shot examples: Show positive and negative cases so the model copies the right format.
  • Structured output: Responses return JSON with a facts array that Mem0 converts into individual memories.
  • LLM configuration: custom_fact_extraction_prompt (Python) or customPrompt (TypeScript) lives alongside your model settings.
When writing the prompt:
  1. State the allowed fact types.
  2. Include short examples that mirror production messages.
  3. Show both empty ([]) and populated outputs.
  4. Remind the model to return JSON with a facts key only (see the sketch after this list).
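
For reference, here is a minimal sketch of the structured output the extractor should return; the fact strings are illustrative, not real responses.

extractor_output = {
    "facts": [
        "Customer name: John Doe",
        "Order #12345 not received",
    ]
}

# Off-topic messages should yield an empty facts array instead:
empty_output = {"facts": []}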

Configure it

Write the custom prompt

custom_fact_extraction_prompt = """
Please only extract entities related to customer support, order details, and user information.
Here are some few-shot examples:

Input: Hi.
Output: {"facts" : []}

Input: The weather is nice today.
Output: {"facts" : []}

Input: My order #12345 hasn't arrived yet.
Output: {"facts" : ["Order #12345 not received"]}

Input: I'm John Doe, and I'd like to return the shoes I bought last week.
Output: {"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}

Input: I ordered a red shirt, size medium, but received a blue one instead.
Output: {"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}

Return the facts and customer information in JSON format as shown above.
"""
Keep example pairs short and mirror the capitalization, punctuation, and tone you see in real user messages.

Load the prompt in configuration

from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4.1-nano-2025-04-14",
            "temperature": 0.2,
            "max_tokens": 2000,
        }
    },
    "custom_fact_extraction_prompt": custom_fact_extraction_prompt,
    "version": "v1.1"
}

m = Memory.from_config(config_dict=config)
After initialization, run a quick add call with a known example and confirm the response splits into separate facts.

See it in action

Example: Order support memory

m.add("Yesterday, I ordered a laptop, the order id is 12345", user_id="alice")
The output contains only the facts described in your prompt, each stored as a separate memory entry.
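
With the prompt above, the response should resemble this sketch; the exact fields depend on your Mem0 version, and the wording and split of the facts here are illustrative:

result = m.add("Yesterday, I ordered a laptop, the order id is 12345", user_id="alice")
print(result)
# Illustrative output -- IDs omitted, wording may differ:
# {"results": [
#     {"memory": "Ordered a laptop", "event": "ADD"},
#     {"memory": "Order ID: 12345", "event": "ADD"}
# ]}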

Example: Irrelevant message filtered out

m.add("I like going to hikes", user_id="alice")
Empty results show the prompt successfully ignored content outside your target domain.
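
The same check works for filtered messages (response shape assumed as above):

result = m.add("I like going on hikes", user_id="alice")
print(result)
# Expected: no facts pass the filter, so nothing is stored
# {"results": []}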

Verify the feature is working

  • Log every call during rollout and confirm the facts array matches your schema.
  • Check that unrelated messages return an empty results array.
  • Run regression samples whenever you edit the prompt to ensure previously accepted facts still pass; a sketch of such a check follows this list.
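
A small regression harness, assuming the Memory instance from the configuration above and the v1.1 response shape, could look like this sketch; the sample messages and expected substrings are placeholders for your own transcripts:

# Hypothetical regression samples: message -> substrings expected in stored facts.
# An empty list means the message should be filtered out entirely.
REGRESSION_SAMPLES = [
    ("My order #12345 hasn't arrived yet.", ["#12345"]),
    ("The weather is nice today.", []),
]

def run_regression(memory, user_id="regression-check"):
    for message, expected in REGRESSION_SAMPLES:
        result = memory.add(message, user_id=user_id)
        facts = [r.get("memory", "") for r in result.get("results", [])]
        if not expected:
            assert not facts, f"Expected no facts for {message!r}, got {facts}"
        for fragment in expected:
            assert any(fragment in f for f in facts), \
                f"Missing {fragment!r} in extracted facts: {facts}"

run_regression(m)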

Best practices

  1. Be precise: Call out the exact categories or fields you want to capture.
  2. Show negative cases: Include examples that should produce [] so the model learns to skip them.
  3. Keep JSON strict: Avoid extra keys; only return facts to simplify downstream parsing.
  4. Version prompts: Track prompt changes with a version number so you can roll back quickly (see the sketch after this list).
  5. Review outputs regularly: Spot-check stored memories to catch drift early.
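
One lightweight way to apply practices 4 and 5 is to keep prompts in a versioned registry and select the active one at configuration time; the registry below is a plain-Python convention, not a Mem0 API:

# Hypothetical prompt registry -- git tags or a database table also work.
PROMPT_VERSIONS = {
    "v1": custom_fact_extraction_prompt,
    # "v2": revised_prompt,  # add revisions here; roll back by switching the key
}
ACTIVE_PROMPT_VERSION = "v1"

config["custom_fact_extraction_prompt"] = PROMPT_VERSIONS[ACTIVE_PROMPT_VERSION]
m = Memory.from_config(config_dict=config)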