# Memory
TIE stores user memories in a knowledge graph (powered by Graphiti and Neo4j). It automatically extracts entities, relationships, and facts from conversations and uses them to give the LLM context about the user in future requests.
All memory endpoints require a Bearer token from TIE Auth.
## How It Works

- On every request, TIE fetches relevant memories for the authenticated user
- These are injected into the LLM's context as background knowledge
- The LLM can also call `memory_search`, `memory_write`, and `memory_get` tools server-side (invisible to the client)
- After each conversation turn, TIE records the exchange for future memory retrieval
No extra API calls are needed — memory injection is automatic.
## Memory Scoping

Each agent is configured with a memory scope that controls how memories are shared:

| Scope | Behavior | Example |
|---|---|---|
| `shared` | Memories are accessible across all agents using the shared pool | `chatbot`, `research-assistant` |
| `agent` | Memories are isolated to that specific agent | `rag-assistant` |
Memory scope is set per agent — clients cannot override it. See Agents for per-agent scope details.
## Warming the User Cache

TIE uses a multi-tier retrieval architecture for memory. The fastest retrieval happens from a hot in-memory cache. After several minutes of inactivity, user data transitions to a slower tier — the next request falls back to a direct database query while a background job repopulates the cache.
You can signal to TIE that a user is about to start chatting, prompting the system to proactively load their knowledge graph into cache before the first request arrives. The best time to do this is when the user logs in or opens your app.
```
POST https://your-tie-host/memories/graph/{user_id}/warm
Authorization: Bearer $TOKEN
```

| Parameter | In | Required | Description |
|---|---|---|---|
| `user_id` | path | Yes | The user's UUID |
| `agent_id` | query | No | Agent whose memory scope to warm. Defaults to the shared graph, which covers most agents. Only needed for agent-scoped memory (e.g. `rag-assistant`). |
Response:

```json
{
  "entities": [
    { "name": "Project Atlas", "summary": "A migration project the user is leading" },
    { "name": "Bruno", "summary": "Backend developer on the Finerminds team" }
  ],
  "facts": [
    "User is working on TIE Auth integration",
    "Staging deployment target is April 1st"
  ]
}
```

The result is cached for 5 minutes. Subsequent chat requests during that window return instantly from cache. If the cache has expired by the time a chat request arrives, TIE uses a fast database query while triggering a background refresh. Calling this endpoint again performs a full search and re-populates the cache.
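Since the warm call is a plain authenticated POST, it is easy to wrap in a client helper. A minimal Python sketch that assembles the request without sending it (`build_warm_request` is a hypothetical helper, not part of any TIE SDK; host and token are placeholders):

```python
def build_warm_request(host, token, user_id, agent_id=None):
    """Assemble the warm-cache request for POST /memories/graph/{user_id}/warm.

    Returns (method, url, headers) so any HTTP client can send it.
    """
    url = f"{host}/memories/graph/{user_id}/warm"
    if agent_id is not None:
        # Only needed for agent-scoped memory (e.g. rag-assistant);
        # omit for agents that use the shared graph.
        url += f"?agent_id={agent_id}"
    headers = {"Authorization": f"Bearer {token}"}
    return "POST", url, headers
```

Call it at login or app-open time, then fire the request in the background so it never blocks the UI.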
## Add Data to Memory

TIE automatically learns from conversations, but sometimes you need to feed it information from outside the chat — a CRM profile, user preferences from onboarding, health metrics, or notes from another system. The Graph API lets you push arbitrary data into a user's knowledge graph so the LLM can reference it in future conversations.
```
POST https://your-tie-host/memories/graph
Authorization: Bearer $TOKEN
Content-Type: application/json
```

Request body:

```json
{
  "data": "User is a senior engineer at Mindvalley. Prefers dark mode. Timezone is GMT+8.",
  "type": "text",
  "agent_id": "chatbot",
  "source_description": "Onboarding profile import",
  "created_at": "2026-04-01T00:00:00Z"
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `data` | string | Yes | The content to ingest (max 50,000 characters) |
| `type` | `"text"` \| `"json"` \| `"message"` | Yes | Format of the data (see below) |
| `agent_id` | string | Yes | Determines memory scope — shared agents write to the shared graph, agent-scoped agents write to their own |
| `source_description` | string | No | Label describing where the data came from (max 500 characters) |
| `created_at` | ISO 8601 datetime | No | When the data was originally created (defaults to now) |
Data types:

| Type | Use for | Example |
|---|---|---|
| `text` | Free-form notes, profile descriptions, paragraphs | `"User enjoys hiking and lives in Kuala Lumpur"` |
| `json` | Structured records from APIs or databases | `"{\"name\": \"Alice\", \"role\": \"engineer\"}"` |
| `message` | Conversation-style exchanges | `"user: I love dark mode\nassistant: Noted!"` |
Response (202 Accepted):

```json
{ "status": "processing" }
```

The response returns immediately. Entity extraction and relationship building happen in the background — processing time varies depending on data size and system load. After processing completes, the new data will appear in Warming the User Cache results and List Memories responses.
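Validating the documented limits client-side saves a rejected round trip. A Python sketch that builds the request body and enforces the caps from the field table above (`build_graph_payload` is a hypothetical helper, not part of any TIE SDK):

```python
def build_graph_payload(data, data_type, agent_id,
                        source_description=None, created_at=None):
    """Build a request body for POST /memories/graph.

    Enforces the documented limits: 50,000 characters for `data`,
    500 for `source_description`, and the three allowed type values.
    """
    if data_type not in ("text", "json", "message"):
        raise ValueError(f"unsupported type: {data_type!r}")
    if len(data) > 50_000:
        raise ValueError("data exceeds 50,000 character limit")
    payload = {"data": data, "type": data_type, "agent_id": agent_id}
    if source_description is not None:
        if len(source_description) > 500:
            raise ValueError("source_description exceeds 500 character limit")
        payload["source_description"] = source_description
    if created_at is not None:
        # ISO 8601 string; defaults server-side to now when omitted.
        payload["created_at"] = created_at
    return payload
```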
## Formatting Data for Extraction

TIE uses Graphiti to extract entities, relationships, and facts from the text you send. The quality of extraction depends heavily on how you format the data. Graphiti needs declarative statements — not questions, single words, or ambiguous fragments.
Write facts, not questions. Graphiti extracts meaning from factual statements. A question like "What do you want to do?" contains no information to store.
| Format | Extractable? | Why |
|---|---|---|
| `"User wants to focus on improving their health"` | Yes | Clear entity (user) + fact (goal is health) |
| `"What's something you want to pull off? — Life"` | No | A question with a one-word answer — no extractable relationship |
| `"User's goal for the next few months is related to Life"` | Marginal | Better, but "Life" is too vague to create a useful entity |
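This guidance can be applied mechanically when preprocessing answers on the client. A Python sketch (`rephrase_answer` is a hypothetical helper; the wording templates and the two-word threshold are assumptions to tune for your data):

```python
def rephrase_answer(question, answer, min_words=2):
    """Turn a survey question-answer pair into a declarative statement.

    Answers with enough substance become direct statements; very short
    answers are wrapped with the question so the fact retains context.
    """
    answer = answer.strip()
    if len(answer.split()) >= min_words:
        return f"The user's answer to '{question}' is: {answer}"
    # Too vague to stand alone; keep the question as context.
    return f"When asked '{question}', the user answered: {answer}."
```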
For onboarding or survey data, rephrase question-answer pairs into declarative statements on the client side before sending:

```json
{
  "data": "The user's top priority for the next few months is improving their fitness and running a half marathon.",
  "type": "text",
  "agent_id": "chatbot",
  "source_description": "Onboarding survey response"
}
```

If the user's answer is too vague to form a useful statement (e.g. a single word like "Life"), consider either skipping ingestion or wrapping it with the question for context:

```json
{
  "data": "When asked about their goals for the next few months, the user answered: Life.",
  "type": "text",
  "agent_id": "chatbot"
}
```

For conversation-style data, use the `message` type with role prefixes. This is the same format TIE uses internally when flushing conversation history to memory:

```json
{
  "data": "user: I've been meditating every morning for the past month\nassistant: That's great progress! How long are your sessions?",
  "type": "message",
  "agent_id": "chatbot"
}
```

For structured records from APIs or databases, use the `json` type. Graphiti will parse the structure and extract entities from the fields:

```json
{
  "data": "{\"name\": \"Alice Chen\", \"role\": \"Senior Engineer\", \"team\": \"Platform\", \"timezone\": \"GMT+8\"}",
  "type": "json",
  "agent_id": "chatbot",
  "source_description": "CRM profile sync"
}
```

## Memory Graph (Visualization)
Returns the full knowledge graph — entities (nodes) and their relationships (edges) — for a user. Designed for rendering an interactive graph in your frontend.

```
GET https://your-tie-host/memories/graph
Authorization: Bearer $TOKEN
```

| Parameter | In | Required | Description |
|---|---|---|---|
| `agent_id` | query | No | Scope to a specific agent. Omit to get memories across all agents. |
| `limit` | query | No | Max edges per page (default 200, max 500) |
| `cursor` | query | No | UUID from `next_cursor` for pagination |
Response:

```json
{
  "nodes": [
    { "id": "node-uuid-1", "name": "Fauzaan", "summary": "AI builder working on TIE platform", "labels": ["Person"] },
    { "id": "node-uuid-2", "name": "Mindvalley", "summary": "EdTech company", "labels": ["Organization"] }
  ],
  "edges": [
    {
      "id": "edge-uuid-1",
      "source": "node-uuid-1",
      "target": "node-uuid-2",
      "label": "works_at",
      "fact": "Fauzaan works at Mindvalley",
      "created_at": "2026-04-07T10:30:00+00:00",
      "valid_at": "2026-04-07T10:30:00+00:00"
    }
  ],
  "next_cursor": null
}
```

Each node is an entity (person, place, project, concept) with:

- `id` — unique identifier, use as the node key in your graph
- `name` — display label
- `summary` — short description (good for tooltips)
- `labels` — entity types like `Person`, `Organization`, `Project`

Each edge is a fact connecting two nodes:

- `source` / `target` — node IDs this edge connects
- `label` — relationship type (e.g. `works_at`, `knows`, `prefers`)
- `fact` — the full natural-language statement
- `created_at` / `valid_at` — when the fact was recorded / became true
Pagination: When `next_cursor` is not `null`, pass it as `?cursor={value}` to fetch the next page. Each page returns up to `limit` edges and all nodes referenced by those edges.
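The cursor loop is straightforward to implement client-side. A Python sketch, with `get_page(cursor, limit)` standing in for your HTTP call to `GET /memories/graph` (the function name and injection style are assumptions, not part of any TIE SDK):

```python
def fetch_full_graph(get_page, limit=200):
    """Collect all nodes and edges by following next_cursor to exhaustion.

    Nodes can repeat across pages (every page includes the nodes its
    edges reference), so they are deduplicated by id.
    """
    nodes, edges = {}, []
    cursor = None
    while True:
        page = get_page(cursor, limit)
        for node in page["nodes"]:
            nodes[node["id"]] = node  # dedupe nodes across pages
        edges.extend(page["edges"])
        cursor = page["next_cursor"]
        if cursor is None:
            return list(nodes.values()), edges
```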
Example with pagination:

```
# First page
GET /memories/graph?limit=100

# Next page (using next_cursor from previous response)
GET /memories/graph?limit=100&cursor=edge-uuid-100
```

Example scoped to one agent:

```
GET /memories/graph?agent_id=chatbot&limit=50
```

## Export & Import Memory
Memory is something a user accumulates over time — facts, preferences, relationships, and context that an agent has learned about them. There are three common reasons to move that memory around:
- Account migration — a user is switching to a new account and wants to take their memory with them
- Backup — capture a snapshot now so you can restore if something goes wrong later
- Data portability — give the user a copy of what TIE has learned about them (useful for GDPR-style "export my data" requests)
The export endpoint produces a single JSON document. The import endpoint consumes that same document and writes it into the caller's graph. Both sides span every agent scope for the user in one call — you do not loop per-agent.
Think of it like git clone for a user's knowledge graph: one command pulls the whole repository, one command pushes it back somewhere else. Where this metaphor breaks down: embeddings (the vector representations Graphiti uses for semantic search) are not included in the export by default, and they are rebuilt asynchronously on import — so searches against freshly imported data can return empty for a few minutes while the backfill runs.
### Export Memory

```
GET https://your-tie-host/memories/export
Authorization: Bearer $TOKEN
```

| Parameter | In | Required | Description |
|---|---|---|---|
| `include_episodes` | query | No | Include raw `:Episodic` nodes (conversation chunks the graph was extracted from). Defaults to `false` — episodes are much larger than the extracted graph and contain raw PII. |
| `include_expired` | query | No | Include edges the user previously deleted (soft-deleted via `expired_at`). Defaults to `false`. Useful for full archival; skip for normal use. |
| `include_embeddings` | query | No | Include the vector embeddings on nodes and edges. Defaults to `false`. Embeddings are regeneratable and inflate payload size roughly 10x. |
Response:

```json
{
  "schema_version": "1",
  "exported_at": "2026-04-20T08:15:42.123456+00:00",
  "source_user_id": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "counts": { "entity_nodes": 142, "entity_edges": 389, "episodes": 0 },
  "entity_nodes": [
    {
      "uuid": "n-abc",
      "labels": ["User"],
      "properties": {
        "name": "Alice",
        "summary": "Senior engineer at Mindvalley",
        "created_at": "2026-01-12T09:00:00+00:00"
      }
    }
  ],
  "entity_edges": [
    {
      "uuid": "e-xyz",
      "type": "RELATES_TO",
      "source_uuid": "n-abc",
      "target_uuid": "n-def",
      "properties": {
        "fact": "Alice works at Mindvalley",
        "valid_at": "2026-01-12T09:00:00+00:00"
      }
    }
  ],
  "episodes": []
}
```

Each envelope field:

- `schema_version` — envelope format version. The import endpoint only accepts `"1"` today.
- `source_user_id` — the user the export was taken from. Import uses this to rewrite scoping on the target user.
- `counts` — record totals, mirrored by the lengths of the arrays below.
- `entity_nodes` — extracted entities (people, places, projects, topics) with their attributes.
- `entity_edges` — facts connecting two entities. `fact` is the human-readable statement; `source_uuid` / `target_uuid` reference entries in `entity_nodes`.
- `episodes` — raw conversation chunks, populated only when `include_episodes=true`.
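Before re-importing an envelope (or accepting one from another system), a client-side sanity check can catch malformed payloads early. A Python sketch mirroring the documented 400 conditions (`validate_envelope` is a hypothetical helper, not a TIE API):

```python
def validate_envelope(env):
    """Return a list of problems with an export envelope (empty = OK).

    Checks the conditions the import endpoint rejects with 400:
    unsupported schema_version, missing source_user_id, and counts
    that disagree with the record arrays.
    """
    errors = []
    if env.get("schema_version") != "1":
        errors.append("unsupported schema_version")
    if not env.get("source_user_id"):
        errors.append("missing source_user_id")
    counts = env.get("counts", {})
    for field in ("entity_nodes", "entity_edges", "episodes"):
        if counts.get(field, 0) != len(env.get(field, [])):
            errors.append(f"counts.{field} does not match array length")
    return errors
```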
Size cap: exports above 50,000 total records return `413 Payload Too Large`. Per-agent export (`?agent_id=...`) is not currently supported on `/memories/export`, so a large export cannot be split by agent; if you hit the cap, contact the TIE team to raise it.
### Import Memory

Accepts an exported envelope and writes it into the caller's graph. Intended for the same account (migration, restore) or across accounts (data transfer between users).

```
POST https://your-tie-host/memories/import
Authorization: Bearer $TOKEN
Content-Type: application/json
```

Request body — a JSON envelope produced by `GET /memories/export`. Unchanged envelopes can be re-sent; see the idempotency note below.
Response (202 Accepted):

```json
{
  "status": "processing",
  "import_id": "b7a1c3ef9e5c4d80b6ab1b3b5b5c2c9d",
  "nodes_written": 142,
  "edges_written": 387,
  "episodes_written": 0,
  "edges_skipped": ["e-orphaned-1", "e-orphaned-2"]
}
```

Response fields:

- `import_id` — pass to `GET /memories/import/{import_id}` to poll for embedding regeneration status.
- `nodes_written` / `edges_written` / `episodes_written` — counts of records freshly created in the target graph. Records whose UUID already exists are skipped (see below).
- `edges_skipped` — UUIDs of edges whose `source_uuid` or `target_uuid` did not match any node in the target graph. These are dropped silently; fix the source export or ignore them.
Scoping — what happens to `group_id`. Memory in TIE is scoped by `group_id`, which looks like `{user_id}_{agent_id}`. On import, the envelope's `source_user_id` is stripped from every record's `group_id` and the caller's user ID is substituted. The agent suffix (e.g. `__shared__`, `chatbot`) is preserved. A record originally under `source-user_chatbot` becomes `caller-user_chatbot` in the target graph.
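The rewrite rule fits in a few lines. A Python sketch of the documented behavior (`rewrite_group_id` is illustrative only, not TIE's actual implementation):

```python
def rewrite_group_id(group_id, source_user_id, target_user_id):
    """Apply the import scoping rule to one record's group_id.

    group_id is "{user_id}_{agent_id}": strip the source user prefix,
    substitute the caller's user ID, preserve the agent suffix.
    """
    prefix = source_user_id + "_"
    if not group_id.startswith(prefix):
        raise ValueError(f"group_id {group_id!r} is not scoped to {source_user_id!r}")
    agent_suffix = group_id[len(prefix):]
    return f"{target_user_id}_{agent_suffix}"
```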
Idempotency — skip-if-exists by UUID. Import is safe to re-run. Any node, edge, or episode whose UUID already exists in the target graph is left untouched — nothing is overwritten, and no duplicates are created. This means:

- Re-sending the same envelope produces `nodes_written: 0` on the second call.
- Facts the user has accumulated since the export are not clobbered.
- There is no partial-state concern: a crashed import can be retried safely.
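The skip-if-exists rule amounts to a set-membership check per record. A Python sketch of the semantics (`merge_records` is illustrative; TIE performs this server-side):

```python
def merge_records(existing_uuids, incoming):
    """Partition incoming records into freshly written vs. skipped.

    Records whose UUID already exists in the target graph are skipped,
    never overwritten, so re-running the same import is a no-op.
    """
    written, skipped = [], []
    for record in incoming:
        if record["uuid"] in existing_uuids:
            skipped.append(record["uuid"])
        else:
            written.append(record)
    return written, skipped
```

Running it twice over the same envelope writes everything on the first pass and nothing on the second, which is exactly the `nodes_written: 0` behavior described above.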
Error cases:

| Status | When |
|---|---|
| 400 | `schema_version` is unsupported, `source_user_id` is missing, or the envelope is malformed |
| 413 | Total records exceed the import cap (50,000) |
| 501 | The memory backend is disabled for this deployment |
### Check Import Status

Poll the status of an async import. Useful for showing a progress indicator or gating the UI until semantic search is ready.

```
GET https://your-tie-host/memories/import/{import_id}
Authorization: Bearer $TOKEN
```

Response:

```json
{
  "import_id": "b7a1c3ef9e5c4d80b6ab1b3b5b5c2c9d",
  "status": "ready",
  "nodes_written": 142,
  "edges_written": 387,
  "episodes_written": 0,
  "nodes_embedded": 142,
  "edges_embedded": 387,
  "edges_skipped_count": 2,
  "started_at": "2026-04-20T08:15:42.123456+00:00",
  "finished_at": "2026-04-20T08:18:07.456789+00:00",
  "error": null
}
```

Status values:
- `processing` — the embedding backfill is still running. Search will miss freshly imported data.
- `ready` — backfill complete, semantic search fully covers the imported records.
- `failed` — backfill hit an error. Inspect `error` for details. The graph records themselves are still written; only embeddings are missing. Contact support or re-trigger by importing the same envelope again (idempotent).

`nodes_embedded` and `edges_embedded` count only records from this import, not pre-existing records in the user's graph. They stop incrementing once status is `ready` or `failed`.
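A typical client polls this endpoint until the backfill settles on a terminal state. A Python sketch (`wait_for_import` is a hypothetical helper; `get_status(import_id)` stands in for the HTTP call to `GET /memories/import/{import_id}`):

```python
import time

def wait_for_import(get_status, import_id, poll_interval=2.0, timeout=300.0):
    """Poll until the import reaches a terminal state (ready or failed).

    Returns the final status document; raises TimeoutError if the
    backfill is still processing when the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status(import_id)
        if status["status"] in ("ready", "failed"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"import {import_id} still processing after {timeout}s")
        time.sleep(poll_interval)
```

Gate semantic-search features on `status == "ready"`; the graph data itself is queryable as soon as the import returns 202.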
## List Memories

```
GET https://your-tie-host/memories/{agent_id}
Authorization: Bearer $TOKEN
```

Response:

```json
[
  {
    "id": "edge-abc123",
    "title": "Works on TIE Auth",
    "name": "TIE Auth integration",
    "content": "User is integrating TIE Auth with Finerminds",
    "memory_type": "long_term",
    "source": "edge"
  }
]
```

## Delete a Memory
```
DELETE https://your-tie-host/memories/{agent_id}/{memory_id}
Authorization: Bearer $TOKEN
```

Response:

```json
{ "status": "deleted", "memory_id": "edge-abc123" }
```

Returns `404` if the memory is not found.