Memory

TIE stores user memories in a knowledge graph (powered by Graphiti and Neo4j). It automatically extracts entities, relationships, and facts from conversations and uses them to give the LLM context about the user in future requests.

All memory endpoints require a Bearer token from TIE Auth.

How memory works during a chat request:

  1. On every request, TIE fetches relevant memories for the authenticated user
  2. These are injected into the LLM's context as background knowledge
  3. The LLM can also call memory_search, memory_write, and memory_get tools server-side (invisible to the client)
  4. After each conversation turn, TIE records the exchange for future memory retrieval

No extra API calls are needed — memory injection is automatic.

Each agent is configured with a memory scope that controls how memories are shared:

| Scope | Behavior | Example |
| --- | --- | --- |
| shared | Memories are accessible across all agents using the shared pool | chatbot, research-assistant |
| agent | Memories are isolated to that specific agent | rag-assistant |

Memory scope is set per agent — clients cannot override it. See Agents for per-agent scope details.

Warming the User Cache

TIE uses a multi-tier retrieval architecture for memory. The fastest retrieval happens from a hot in-memory cache. After several minutes of inactivity, user data transitions to a slower tier — the next request falls back to a direct database query while a background job repopulates the cache.

You can signal to TIE that a user is about to start chatting, prompting the system to proactively load their knowledge graph into cache before the first request arrives. The best time to do this is when the user logs in or opens your app.

POST https://your-tie-host/memories/graph/{user_id}/warm
Authorization: Bearer $TOKEN

| Parameter | In | Required | Description |
| --- | --- | --- | --- |
| user_id | path | Yes | The user's UUID |
| agent_id | query | No | Agent whose memory scope to warm. Defaults to the shared graph, which covers most agents. Only needed for agent-scoped memory (e.g. rag-assistant). |

Response:

{
  "entities": [
    { "name": "Project Atlas", "summary": "A migration project the user is leading" },
    { "name": "Bruno", "summary": "Backend developer on the Finerminds team" }
  ],
  "facts": [
    "User is working on TIE Auth integration",
    "Staging deployment target is April 1st"
  ]
}

The result is cached for 5 minutes. Subsequent chat requests during that window return instantly from cache. If the cache has expired by the time a chat request arrives, TIE uses a fast database query while triggering a background refresh. Calling this endpoint again performs a full search and re-populates the cache.

TIE automatically learns from conversations, but sometimes you need to feed it information from outside the chat — a CRM profile, user preferences from onboarding, health metrics, or notes from another system. The Graph API lets you push arbitrary data into a user's knowledge graph so the LLM can reference it in future conversations.

POST https://your-tie-host/memories/graph
Authorization: Bearer $TOKEN
Content-Type: application/json

Request body:

{
  "data": "User is a senior engineer at Mindvalley. Prefers dark mode. Timezone is GMT+8.",
  "type": "text",
  "agent_id": "chatbot",
  "source_description": "Onboarding profile import",
  "created_at": "2026-04-01T00:00:00Z"
}
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| data | string | Yes | The content to ingest (max 50,000 characters) |
| type | "text" \| "json" \| "message" | Yes | Format of the data (see below) |
| agent_id | string | Yes | Determines memory scope — shared agents write to the shared graph, agent-scoped agents write to their own |
| source_description | string | No | Label describing where the data came from (max 500 characters) |
| created_at | ISO 8601 datetime | No | When the data was originally created (defaults to now) |

Data types:

| Type | Use for | Example |
| --- | --- | --- |
| text | Free-form notes, profile descriptions, paragraphs | "User enjoys hiking and lives in Kuala Lumpur" |
| json | Structured records from APIs or databases | "{\"name\": \"Alice\", \"role\": \"engineer\"}" |
| message | Conversation-style exchanges | "user: I love dark mode\nassistant: Noted!" |

Response (202 Accepted):

{
  "status": "processing"
}

The response returns immediately. Entity extraction and relationship building happen in the background — processing time varies depending on data size and system load. After processing completes, the new data will appear in Warming the User Cache results and List Memories responses.
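A client-side sketch of building and validating the request body before sending it. The helper name and constants are assumptions; the field limits and allowed types come from the tables above:

```python
import json

MAX_DATA_CHARS = 50_000            # documented cap on the data field
MAX_SOURCE_DESCRIPTION_CHARS = 500  # documented cap on source_description
VALID_TYPES = {"text", "json", "message"}

def build_graph_payload(data, data_type, agent_id,
                        source_description=None, created_at=None):
    """Validate and serialize a request body for POST /memories/graph."""
    if data_type not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    if len(data) > MAX_DATA_CHARS:
        raise ValueError("data exceeds the 50,000 character limit")
    body = {"data": data, "type": data_type, "agent_id": agent_id}
    if source_description is not None:
        if len(source_description) > MAX_SOURCE_DESCRIPTION_CHARS:
            raise ValueError("source_description exceeds 500 characters")
        body["source_description"] = source_description
    if created_at is not None:
        body["created_at"] = created_at  # ISO 8601; server defaults to now
    return json.dumps(body)
```

Validating client-side avoids a round trip for payloads the server would reject anyway.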

TIE uses Graphiti to extract entities, relationships, and facts from the text you send. The quality of extraction depends heavily on how you format the data. Graphiti needs declarative statements — not questions, single words, or ambiguous fragments.

Write facts, not questions. Graphiti extracts meaning from factual statements. A question like "What do you want to do?" contains no information to store.

| Format | Extractable? | Why |
| --- | --- | --- |
| "User wants to focus on improving their health" | Yes | Clear entity (user) + fact (goal is health) |
| "What's something you want to pull off? — Life" | No | A question with a one-word answer — no extractable relationship |
| "User's goal for the next few months is related to Life" | Marginal | Better, but "Life" is too vague to create a useful entity |

For onboarding or survey data, rephrase question-answer pairs into declarative statements on the client side before sending:

{
  "data": "The user's top priority for the next few months is improving their fitness and running a half marathon.",
  "type": "text",
  "agent_id": "chatbot",
  "source_description": "Onboarding survey response"
}

If the user's answer is too vague to form a useful statement (e.g. a single word like "Life"), consider either skipping ingestion or wrapping it with the question for context:

{
  "data": "When asked about their goals for the next few months, the user answered: Life.",
  "type": "text",
  "agent_id": "chatbot"
}
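One way to handle both cases in your client is a small rephrasing helper. This is a naive template-based sketch; the question text, templates, and word-count threshold are all illustrative, and a real integration might maintain one template per survey question (or use an LLM rewrite pass):

```python
# Hypothetical per-question templates mapping survey questions to
# declarative frames. These entries are illustrative only.
TEMPLATES = {
    "What's something you want to pull off in the next few months?":
        "The user's top priority for the next few months is {answer}.",
}

def qa_to_statement(question: str, answer: str) -> str:
    """Turn a question-answer pair into a declarative statement for ingestion."""
    answer = answer.strip().rstrip(".")
    template = TEMPLATES.get(question)
    if template is None or len(answer.split()) < 3:
        # Too vague to stand alone (e.g. "Life") -- keep the question as context.
        return f'When asked "{question}", the user answered: {answer}.'
    return template.format(answer=answer)
```

The output string then goes into the data field of a POST /memories/graph request with type "text".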

For conversation-style data, use the message type with role prefixes. This is the same format TIE uses internally when flushing conversation history to memory:

{
  "data": "user: I've been meditating every morning for the past month\nassistant: That's great progress! How long are your sessions?",
  "type": "message",
  "agent_id": "chatbot"
}

For structured records from APIs or databases, use the json type. Graphiti will parse the structure and extract entities from the fields:

{
  "data": "{\"name\": \"Alice Chen\", \"role\": \"Senior Engineer\", \"team\": \"Platform\", \"timezone\": \"GMT+8\"}",
  "type": "json",
  "agent_id": "chatbot",
  "source_description": "CRM profile sync"
}

Returns the full knowledge graph — entities (nodes) and their relationships (edges) — for a user. Designed for rendering an interactive graph in your frontend.

GET https://your-tie-host/memories/graph
Authorization: Bearer $TOKEN

| Parameter | In | Required | Description |
| --- | --- | --- | --- |
| agent_id | query | No | Scope to a specific agent. Omit to get memories across all agents. |
| limit | query | No | Max edges per page (default 200, max 500) |
| cursor | query | No | UUID from next_cursor for pagination |

Response:

{
  "nodes": [
    {
      "id": "node-uuid-1",
      "name": "Fauzaan",
      "summary": "AI builder working on TIE platform",
      "labels": ["Person"]
    },
    {
      "id": "node-uuid-2",
      "name": "Mindvalley",
      "summary": "EdTech company",
      "labels": ["Organization"]
    }
  ],
  "edges": [
    {
      "id": "edge-uuid-1",
      "source": "node-uuid-1",
      "target": "node-uuid-2",
      "label": "works_at",
      "fact": "Fauzaan works at Mindvalley",
      "created_at": "2026-04-07T10:30:00+00:00",
      "valid_at": "2026-04-07T10:30:00+00:00"
    }
  ],
  "next_cursor": null
}

Each node is an entity (person, place, project, concept) with:

  • id — unique identifier, use as the node key in your graph
  • name — display label
  • summary — short description (good for tooltips)
  • labels — entity types like Person, Organization, Project

Each edge is a fact connecting two nodes:

  • source / target — node IDs this edge connects
  • label — relationship type (e.g. works_at, knows, prefers)
  • fact — the full natural-language statement
  • created_at / valid_at — when the fact was recorded / became true

Pagination: When next_cursor is not null, pass it as ?cursor={value} to fetch the next page. Each page returns up to limit edges and all nodes referenced by those edges.

Example with pagination:

# First page
GET /memories/graph?limit=100
# Next page (using next_cursor from previous response)
GET /memories/graph?limit=100&cursor=edge-uuid-100

Example scoped to one agent:

GET /memories/graph?agent_id=chatbot&limit=50
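The pagination loop can be sketched generically. Here `fetch_page` stands in for whatever HTTP client you use; the function name and the dedup-by-id strategy are assumptions, while the `next_cursor` contract comes from the response format above:

```python
def fetch_full_graph(fetch_page):
    """Follow next_cursor until the whole graph has been paged through.

    fetch_page(cursor) is any callable that GETs /memories/graph with the
    given cursor (None for the first page) and returns the parsed JSON.
    """
    nodes, edges = {}, []
    cursor = None
    while True:
        page = fetch_page(cursor)
        for node in page["nodes"]:
            nodes[node["id"]] = node  # nodes may repeat across pages; dedupe by id
        edges.extend(page["edges"])
        cursor = page["next_cursor"]
        if cursor is None:
            return list(nodes.values()), edges
```

Deduplicating nodes matters because each page returns every node referenced by its edges, so a well-connected entity can appear on several pages.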

Memory is something a user accumulates over time — facts, preferences, relationships, and context that an agent has learned about them. There are three common reasons to move that memory around:

  • Account migration — a user is switching to a new account and wants to take their memory with them
  • Backup — capture a snapshot now so you can restore if something goes wrong later
  • Data portability — give the user a copy of what TIE has learned about them (useful for GDPR-style "export my data" requests)

The export endpoint produces a single JSON document. The import endpoint consumes that same document and writes it into the caller's graph. Both sides span every agent scope for the user in one call — you do not loop per-agent.

Think of it like git clone for a user's knowledge graph: one command pulls the whole repository, one command pushes it back somewhere else. Where this metaphor breaks down: embeddings (the vector representations Graphiti uses for semantic search) are not re-generated automatically on export, and they are rebuilt asynchronously on import — so searches against freshly imported data can return empty for a few minutes while the backfill runs.

GET https://your-tie-host/memories/export
Authorization: Bearer $TOKEN

| Parameter | In | Required | Description |
| --- | --- | --- | --- |
| include_episodes | query | No | Include raw :Episodic nodes (conversation chunks the graph was extracted from). Defaults to false — episodes are much larger than the extracted graph and contain raw PII. |
| include_expired | query | No | Include edges the user previously deleted (soft-deleted via expired_at). Defaults to false. Useful for full archival; skip for normal use. |
| include_embeddings | query | No | Include the vector embeddings on nodes and edges. Defaults to false. Embeddings are regeneratable and inflate payload size roughly 10x. |

Response:

{
  "schema_version": "1",
  "exported_at": "2026-04-20T08:15:42.123456+00:00",
  "source_user_id": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "counts": {
    "entity_nodes": 142,
    "entity_edges": 389,
    "episodes": 0
  },
  "entity_nodes": [
    {
      "uuid": "n-abc",
      "labels": ["User"],
      "properties": {
        "name": "Alice",
        "summary": "Senior engineer at Mindvalley",
        "created_at": "2026-01-12T09:00:00+00:00"
      }
    }
  ],
  "entity_edges": [
    {
      "uuid": "e-xyz",
      "type": "RELATES_TO",
      "source_uuid": "n-abc",
      "target_uuid": "n-def",
      "properties": {
        "fact": "Alice works at Mindvalley",
        "valid_at": "2026-01-12T09:00:00+00:00"
      }
    }
  ],
  "episodes": []
}

Each envelope field:

  • schema_version — envelope format version. The import endpoint only accepts "1" today.
  • source_user_id — the user the export was taken from. Import uses this to rewrite scoping on the target user.
  • counts — record totals, mirrored by the lengths of the arrays below.
  • entity_nodes — extracted entities (people, places, projects, topics) with their attributes.
  • entity_edges — facts connecting two entities. fact is the human-readable statement; source_uuid / target_uuid reference entries in entity_nodes.
  • episodes — raw conversation chunks, populated only when include_episodes=true.

Size cap: exports above 50,000 total records return 413 Payload Too Large. There is currently no way to split the export by agent (?agent_id=... is not supported on /memories/export); if you hit the cap, contact the TIE team to raise it.
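Before re-importing an envelope, you can sanity-check it client-side. This sketch mirrors the documented 400/413 conditions; the helper name and the ValueError messages are illustrative:

```python
RECORD_CAP = 50_000  # export/import size cap from the docs

def validate_envelope(env: dict) -> None:
    """Client-side sanity check of an export envelope before re-importing it."""
    if env.get("schema_version") != "1":
        raise ValueError("unsupported schema_version")      # server returns 400
    if not env.get("source_user_id"):
        raise ValueError("source_user_id is missing")       # server returns 400
    for key in ("entity_nodes", "entity_edges", "episodes"):
        # counts is documented to mirror the array lengths exactly.
        if env["counts"][key] != len(env.get(key, [])):
            raise ValueError(f"counts.{key} does not match the array length")
    if sum(env["counts"].values()) > RECORD_CAP:
        raise ValueError("envelope exceeds 50,000 records")  # server returns 413
```

This catches a corrupted or hand-edited envelope before the import endpoint rejects it.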

Accept an exported envelope and write it into the caller's graph. Intended for the same account (migration, restore) or across accounts (data transfer between users).

POST https://your-tie-host/memories/import
Authorization: Bearer $TOKEN
Content-Type: application/json

Request body — a JSON envelope produced by GET /memories/export. Unchanged envelopes can be re-sent; see the idempotency note below.

Response (202 Accepted):

{
  "status": "processing",
  "import_id": "b7a1c3ef9e5c4d80b6ab1b3b5b5c2c9d",
  "nodes_written": 142,
  "edges_written": 387,
  "episodes_written": 0,
  "edges_skipped": ["e-orphaned-1", "e-orphaned-2"]
}

Response fields:

  • import_id — pass to GET /memories/import/{import_id} to poll for embedding regeneration status.
  • nodes_written / edges_written / episodes_written — counts of records freshly created in the target graph. Records whose UUID already exists are skipped (see below).
  • edges_skipped — UUIDs of edges whose source_uuid or target_uuid did not match any node in the target graph. These are dropped silently; fix the source export or ignore them.

Scoping — what happens to group_id. Memory in TIE is scoped by group_id, which looks like {user_id}_{agent_id}. On import, the envelope's source_user_id is stripped from every record's group_id and the caller's user ID is substituted. The agent suffix (e.g. __shared__, chatbot) is preserved. A record originally under source-user_chatbot becomes caller-user_chatbot in the target graph.
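The effect of the rewrite can be sketched as follows. This is an illustration of the documented behavior, not the server's actual code:

```python
def rewrite_group_id(group_id: str, source_user_id: str, target_user_id: str) -> str:
    """Swap the user prefix of a group_id, preserving the agent suffix
    (e.g. __shared__ or chatbot), as the import endpoint does."""
    prefix = source_user_id + "_"
    if not group_id.startswith(prefix):
        raise ValueError("group_id does not belong to the source user")
    return target_user_id + "_" + group_id[len(prefix):]
```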

Idempotency — skip-if-exists by UUID. Import is safe to re-run. Any node, edge, or episode whose UUID already exists in the target graph is left untouched — nothing is overwritten, and no duplicates are created. This means:

  • Re-sending the same envelope produces nodes_written: 0 on the second call.
  • Facts the user has accumulated since the export are not clobbered.
  • There is no partial-state concern: a crashed import can be retried safely.

Error cases:

| Status | When |
| --- | --- |
| 400 | schema_version is unsupported, source_user_id is missing, or the envelope is malformed |
| 413 | Total records exceed the import cap (50,000) |
| 501 | The memory backend is disabled for this deployment |

Poll the status of an async import. Useful for showing a progress indicator or gating the UI until semantic search is ready.

GET https://your-tie-host/memories/import/{import_id}
Authorization: Bearer $TOKEN

Response:

{
  "import_id": "b7a1c3ef9e5c4d80b6ab1b3b5b5c2c9d",
  "status": "ready",
  "nodes_written": 142,
  "edges_written": 387,
  "episodes_written": 0,
  "nodes_embedded": 142,
  "edges_embedded": 387,
  "edges_skipped_count": 2,
  "started_at": "2026-04-20T08:15:42.123456+00:00",
  "finished_at": "2026-04-20T08:18:07.456789+00:00",
  "error": null
}

Status values:

  • processing — the embedding backfill is still running. Search will miss freshly imported data.
  • ready — backfill complete, semantic search fully covers the imported records.
  • failed — backfill hit an error. Inspect error for details. The graph records themselves are still written; only embeddings are missing. Contact support or re-trigger by importing the same envelope again (idempotent).

nodes_embedded and edges_embedded count only records from this import, not pre-existing records in the user's graph. They will stop incrementing once status is ready or failed.
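A polling loop for gating the UI might look like this. `get_status` stands in for your HTTP client, and the interval/timeout values are arbitrary defaults, not recommendations from TIE:

```python
import time

def wait_for_import(get_status, import_id, poll_interval=2.0, timeout=300.0):
    """Poll the import status until the embedding backfill settles.

    get_status(import_id) is any callable that GETs
    /memories/import/{import_id} and returns the parsed JSON response.
    Returns the final payload; raises TimeoutError if it never settles.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(import_id)
        if status["status"] in ("ready", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"import {import_id} still processing after {timeout}s")
```

Remember that a "failed" result only means embeddings are missing; the graph records were written, so re-importing the same envelope is a safe retry.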

List Memories

GET https://your-tie-host/memories/{agent_id}
Authorization: Bearer $TOKEN

Response:

[
  {
    "id": "edge-abc123",
    "title": "Works on TIE Auth",
    "name": "TIE Auth integration",
    "content": "User is integrating TIE Auth with Finerminds",
    "memory_type": "long_term",
    "source": "edge"
  }
]
Delete a Memory

DELETE https://your-tie-host/memories/{agent_id}/{memory_id}
Authorization: Bearer $TOKEN

Response:

{
  "status": "deleted",
  "memory_id": "edge-abc123"
}

Returns 404 if the memory is not found.
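A minimal sketch of building the delete call, e.g. for a "forget this" button. The host constant and helper name are placeholders:

```python
from urllib.request import Request

TIE_HOST = "https://your-tie-host"  # placeholder host

def delete_memory_request(agent_id: str, memory_id: str, token: str) -> Request:
    """Build the DELETE request for a single memory (an edge id from the list endpoint)."""
    return Request(
        f"{TIE_HOST}/memories/{agent_id}/{memory_id}",
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )

# Send with: urllib.request.urlopen(delete_memory_request("chatbot", "edge-abc123", token))
```

Handle a 404 gracefully: the memory may already have been deleted by a retry or another session.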