Overview
TIE is a platform of two services that work together:
TIE Auth — Authentication service that handles user registration, login (email/password + OAuth), and token management. Built on Google Identity Platform / Firebase.
TIE AI Gateway — OpenAI-compatible API that sits between your application and LLM providers. Adds persistent memory, personas, observability, and client-side tool calling — while exposing the same /v1/chat/completions endpoint that every OpenAI SDK already knows.
How They Work Together
```mermaid
flowchart LR
    App["Your App"]
    subgraph TIE["TIE Platform"]
        Auth["Auth"]
        Gateway["AI Gateway"]
    end
    App -- "1. Sign up or log in" --> Auth
    Auth -- "2. Get back a token" --> App
    App -- "3. Send messages" --> Gateway
    Gateway -- "4. AI response" --> App
    style App fill:#f9f9f9,stroke:#d1d1d1,color:#181818
    style Auth fill:#fff7ed,stroke:#F68220,color:#181818
    style Gateway fill:#fff7ed,stroke:#F68220,color:#181818
    style TIE fill:#fff7ed08,stroke:#F68220,color:#181818
```
That's it — 4 steps:
- Get a token — Call TIE Auth to sign up or log in. You get back a Bearer token.
- Use the token — Pass it as `Authorization: Bearer <token>` on every AI request.
- Send messages — `POST /v1/chat/completions` with your messages, just like OpenAI.
- Get AI responses — TIE handles memory, personas, and model routing automatically. You just get the response back.
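Steps 2–4 can be sketched with the Python standard library. The base URL is a hypothetical placeholder and the model name is illustrative; nothing is sent here, the request is only assembled:

```python
import json
import urllib.request

BASE = "https://tie.example.com"  # hypothetical; substitute your instance

def chat_completion_request(token: str, messages: list) -> urllib.request.Request:
    """Build (without sending) the step-3 call: an OpenAI-style
    POST /v1/chat/completions carrying the step-2 Bearer token.
    Passing the result to urllib.request.urlopen() would return
    the step-4 AI response."""
    body = json.dumps({"model": "gpt-5-mini", "messages": messages}).encode()
    return urllib.request.Request(
        f"{BASE}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_completion_request("example-token", [{"role": "user", "content": "Hello"}])
print(req.get_header("Authorization"))  # Bearer example-token
```

Because the endpoint mirrors OpenAI's schema, any OpenAI SDK pointed at your instance's base URL works the same way.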
TIE Auth Features
- OAuth sign-in — Google and Apple, via client-side token flow
- Email/password — Registration and login
- Token management — JWT tokens with 1-hour expiry, refresh tokens for re-authentication
- User profiles — Metadata, roles, account suspension
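Since access tokens expire after an hour, a client typically inspects the JWT's `exp` claim to decide when to use its refresh token. A minimal sketch (the decode skips signature verification, which is fine for a client-side expiry check):

```python
import base64
import json
import time

def jwt_expires_soon(token: str, leeway: int = 300) -> bool:
    """Decode a JWT's payload (no signature check) and report whether
    its `exp` claim falls within `leeway` seconds, the cue to use the
    refresh token and re-authenticate."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] - time.time() < leeway

# Demo with a fake, unsigned token whose exp is 1 hour out (TIE's expiry):
claims = {"exp": int(time.time()) + 3600}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake = "e30." + payload + "."
print(jwt_expires_soon(fake))  # False: not yet within the 5-minute window
```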
TIE AI Gateway Features
- OpenAI-compatible — Drop-in replacement for OpenAI, Cloudflare AI Gateway, or LiteLLM
- Custom prompt mode — Send your own `system` or `developer` message to override the agent's built-in prompt while keeping memory and personas
- Memory — Persistent user memory via Graphiti knowledge graph, injected automatically
- Personas — Custom system prompts per user, injected automatically
- Client tool calling — Pass tool definitions per-request, TIE routes tool_calls back for execution
- Internal tools — Memory search/write execute server-side, invisible to the client
- Thread persistence — Maintain conversation state across requests
- Audio — Speech-to-text and text-to-speech via OpenAI-compatible endpoints
- Image Generation — Generate images via OpenAI-compatible `/v1/images/generations` with OpenAI and Google Imagen models
- Observability — All requests traced via Langfuse
- Multi-agent — Select different agents via the `X-Agent-Id` header
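Several of these features combine in a single request: the `X-Agent-Id` header picks the agent, a `developer` message engages custom prompt mode, and OpenAI-style `tools` enable client tool calling. A sketch that assembles such a request (the model name is illustrative):

```python
import json
from typing import Optional

def build_gateway_call(token: str, agent_id: str, user_text: str,
                       developer_prompt: Optional[str] = None,
                       tools: Optional[list] = None):
    """Assemble headers and JSON body for a TIE AI Gateway request.
    X-Agent-Id selects the agent; an optional `developer` message
    overrides the agent's built-in prompt (memory and personas are
    still injected server-side); `tools` are OpenAI-style definitions
    that come back as tool_calls for the client to execute."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "X-Agent-Id": agent_id,
    }
    messages = []
    if developer_prompt:
        messages.append({"role": "developer", "content": developer_prompt})
    messages.append({"role": "user", "content": user_text})
    body = {"model": "gpt-5-mini", "messages": messages}  # illustrative model
    if tools:
        body["tools"] = tools
    return headers, json.dumps(body)
```

POST the body to `/v1/chat/completions` with those headers and the gateway handles the rest.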
Supported Providers
- Anthropic (via Vertex AI) — Claude Haiku 4.5, Claude Sonnet 4.5, Claude Sonnet 4.6, Claude Opus 4.5, Claude Opus 4.6
- OpenAI — GPT-5 Nano, GPT-5 Mini, GPT-5.1
- Google Vertex AI — Gemini 2.5 Flash, Gemini 2.5 Pro, Gemini 3 Flash
Query `GET /info` on your instance to see which models are currently available.
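For example, building that request with the standard library (the host is a hypothetical placeholder; the response schema depends on your deployment, so it isn't shown here):

```python
import urllib.request

def info_request(base_url: str) -> urllib.request.Request:
    """Build the GET /info request; pass the result to
    urllib.request.urlopen() to fetch it from your instance."""
    return urllib.request.Request(f"{base_url.rstrip('/')}/info", method="GET")

req = info_request("https://tie.example.com")  # hypothetical host
```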
Documentation
| Page | Description |
|---|---|
| Authentication | Sign up, log in, OAuth, token management, and refresh flows |
| Chat Completions | Send messages via the OpenAI-compatible /v1/chat/completions endpoint |
| Audio | Speech-to-text and text-to-speech via /v1/audio/transcriptions and /v1/audio/speech |
| Image Generation | Generate images via /v1/images/generations with OpenAI and Google Imagen |
| Agents | Multi-agent routing, agent selection, and the X-Agent-Id header |
| Threads | Conversation threads — list, rename, delete, and history |
| Memory | Persistent memory, context prefetch, and memory management |
| Personas | Custom system prompts per user — create, activate, and manage |