Quickstart

Sign up, create a key, ship your first request — three minutes flat.

FIG. 00 · QUICKSTART · 3 MIN TO 200 OK

Three steps to your first successful response. The AI SDK is the recommended client — see streamText and generateText — but you can use the OpenAI SDK, the Anthropic SDK, or plain fetch interchangeably.

FIG. 01 · FIRST REQUEST · SCHEMATIC
Your AI SDK call lowers into a single HTTPS POST. Headers carry the bearer key and an idempotency key; the body carries the model id, messages, and `stream: true`. The AI SDK is a thin adapter — the wire shape is OpenAI-compatible.

1. Sign up

01

Create your account

Go to synapse.garden/signup. Sign up with email, Google, or GitHub. No credit card required. A workspace is created for you on first sign-in.

02

Create an API key

From your dashboard, click Keys → New API key. Pick a project (we create a default production project for you), give it a name, and copy the key — we show the full key only once.

03

Drop the key into your env

echo 'MG_KEY=mg_live_xxxxxxxxxxxxxxxxxxxxxxxx' >> .env.local

The key prefix tells you the environment: mg_live_* is production, mg_test_* is sandbox.
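Because the environment is encoded in the prefix, a startup guard can catch a sandbox key before it reaches production. The helper below is our own sketch, not part of any SDK:

```typescript
// Infer the environment from the key prefix described above.
function keyEnvironment(key: string): "production" | "sandbox" | "unknown" {
  if (key.startsWith("mg_live_")) return "production"
  if (key.startsWith("mg_test_")) return "sandbox"
  return "unknown"
}

// Warn early if the deployed environment and the key disagree.
const env = keyEnvironment(process.env.MG_KEY ?? "")
if (process.env.NODE_ENV === "production" && env !== "production") {
  console.warn(`Expected an mg_live_* key in production, got a ${env} key`)
}
```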

2. Ship your first request

Pick the SDK you already use. Synapse Garden speaks all three wire formats.

AI SDK

import { generateText } from "ai"
import { createOpenAICompatible } from "@ai-sdk/openai-compatible"

const synapse = createOpenAICompatible({
  name: "synapse-garden",
  baseURL: "https://synapse.garden/api/v1",
  apiKey: process.env.MG_KEY,
})

const { text } = await generateText({
  model: synapse("openai/gpt-5.4"),
  prompt: "Write a one-sentence bedtime story about a robot who likes haiku.",
})

console.log(text)
OpenAI SDK

import OpenAI from "openai"

const client = new OpenAI({
  apiKey: process.env.MG_KEY,
  baseURL: "https://synapse.garden/api/v1",
})

const res = await client.chat.completions.create({
  model: "openai/gpt-5.4",
  messages: [
    { role: "user", content: "Write a one-sentence bedtime story about a robot." },
  ],
})

console.log(res.choices[0].message.content)
Anthropic SDK

import Anthropic from "@anthropic-ai/sdk"

const client = new Anthropic({
  apiKey: process.env.MG_KEY,
  baseURL: "https://synapse.garden/api",
})

const msg = await client.messages.create({
  model: "anthropic/claude-opus-4.6",
  max_tokens: 256,
  messages: [
    { role: "user", content: "Write a one-sentence bedtime story about a robot." },
  ],
})

console.log(msg.content[0].text)

3. Stream the response

For real-time output:

import { streamText } from "ai"
import { createOpenAICompatible } from "@ai-sdk/openai-compatible"

const synapse = createOpenAICompatible({
  name: "synapse-garden",
  baseURL: "https://synapse.garden/api/v1",
  apiKey: process.env.MG_KEY,
})

const result = streamText({
  model: synapse("openai/gpt-5.4"),
  prompt: "Tell me about Synapse Garden.",
})

for await (const part of result.textStream) {
  process.stdout.write(part)
}

console.log("\nUsage:", await result.usage)
With the OpenAI SDK (reusing the client from step 2):

const stream = await client.chat.completions.create({
  model: "openai/gpt-5.4",
  messages: [{ role: "user", content: "Tell me about Synapse Garden." }],
  stream: true,
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "")
}
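If you're on plain `fetch` rather than an SDK, the stream arrives as OpenAI-style server-sent events: `data: {...}` lines, terminated by `data: [DONE]`. A minimal per-line parser, assuming that framing:

```typescript
// Extract the text delta from one SSE line of an OpenAI-compatible stream.
// Returns null for non-data lines and for the [DONE] terminator.
function parseSSEChunk(line: string): string | null {
  if (!line.startsWith("data: ")) return null
  const payload = line.slice("data: ".length)
  if (payload === "[DONE]") return null
  const json = JSON.parse(payload)
  return json.choices?.[0]?.delta?.content ?? ""
}
```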

Same pricing, different latency

Streaming costs the same as non-streaming. The difference is perceived latency — first token typically arrives in 200–800ms depending on the model.
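Time-to-first-token is easy to measure yourself. This small helper (our own, not part of the AI SDK) times the first chunk of any text stream, such as `result.textStream` above:

```typescript
// Milliseconds until the first chunk arrives; -1 if the stream is empty.
async function timeToFirstToken(stream: AsyncIterable<string>): Promise<number> {
  const start = performance.now()
  for await (const _chunk of stream) {
    return performance.now() - start // stop after the first chunk
  }
  return -1
}
```

Pass it `result.textStream` from `streamText` to compare models against the 200–800ms figure.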

4. Try a different model

Every model in the catalog uses the creator/model-slug format. Swap the string and ship.

// OpenAI
model: "openai/gpt-5.4"
// Anthropic
model: "anthropic/claude-opus-4.6"
// Google
model: "google/gemini-3.1-pro-preview"
// Meta (open weights)
model: "meta/llama-4-405b"
// Mistral
model: "mistral/mistral-large-3"

Browse the full catalog at /models — filter by modality, search by capability, and see live pricing.
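Because every model answers the same request shape, a fallback chain is just a loop over model ids. A sketch (the helper and the model ordering are illustrative, not a recommendation):

```typescript
// Try each model id in order; return the first successful result.
async function withFallback<T>(
  models: string[],
  call: (model: string) => Promise<T>,
): Promise<T> {
  let lastError: unknown
  for (const model of models) {
    try {
      return await call(model)
    } catch (err) {
      lastError = err // this model failed; try the next id in the list
    }
  }
  throw lastError
}
```

Use it by wrapping any of the calls above, e.g. `withFallback(["anthropic/claude-opus-4.6", "openai/gpt-5.4"], (model) => ...)`.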

5. Live playground

Try a model right here without leaving the docs:

Pick a model, write a prompt, and hit Run. The playground uses a docs-only sandbox key, rate limited to 5 requests per minute per IP.

Get your own key →

What's next