AI SDK (recommended)
One API across every model. Streaming, tools, structured output, multimodal — all type-safe.
The AI SDK is the recommended way to talk to Synapse Garden. It gives you a single, type-safe API for chat, streaming, tools, structured output, embeddings, image generation, and video generation — across every model in the catalog. We've designed our infrastructure around AI SDK semantics, so it's the path of least friction. Under the hood it's an OpenAI-compatible custom provider pointed at our base URL.
You can use the OpenAI or Anthropic SDKs too, but you'll be writing more code, doing more provider-specific dancing, and shipping less.
Install
pnpm add ai zod
The AI SDK lives in the ai package. zod is for structured output schemas — pull it in if you don't have it already.
Chat / streaming
import { generateText, streamText } from "ai"
// Non-streaming
const { text } = await generateText({
model: "openai/gpt-5.4",
baseURL: "https://synapse.garden/api/v1",
apiKey: process.env.MG_KEY,
prompt: "Why is the sky blue?",
})
// Streaming
const result = streamText({
model: "openai/gpt-5.4",
baseURL: "https://synapse.garden/api/v1",
apiKey: process.env.MG_KEY,
prompt: "Tell me about Synapse Garden.",
})
for await (const part of result.textStream) {
process.stdout.write(part)
}
The same generateText / streamText works for every text model in our catalog. Swap openai/gpt-5.4 for anthropic/claude-opus-4.6 or google/gemini-3.1-pro-preview and ship.
Sharing config
Setting baseURL and apiKey on every call is repetitive. Move it to a single helper:
// lib/ai.ts
import { generateText, streamText, generateObject, streamObject, embed, embedMany } from "ai"
const config = {
baseURL: "https://synapse.garden/api/v1",
apiKey: process.env.MG_KEY!,
} as const
export const ai = {
generateText: (opts: Omit<Parameters<typeof generateText>[0], "baseURL" | "apiKey">) =>
generateText({ ...config, ...opts }),
streamText: (opts: Omit<Parameters<typeof streamText>[0], "baseURL" | "apiKey">) =>
streamText({ ...config, ...opts }),
generateObject: (opts: Omit<Parameters<typeof generateObject>[0], "baseURL" | "apiKey">) =>
generateObject({ ...config, ...opts }),
streamObject: (opts: Omit<Parameters<typeof streamObject>[0], "baseURL" | "apiKey">) =>
streamObject({ ...config, ...opts }),
embed: (opts: Omit<Parameters<typeof embed>[0], "baseURL" | "apiKey">) =>
embed({ ...config, ...opts }),
embedMany: (opts: Omit<Parameters<typeof embedMany>[0], "baseURL" | "apiKey">) =>
embedMany({ ...config, ...opts }),
}
Then everywhere else:
import { ai } from "@/lib/ai"
const { text } = await ai.generateText({
model: "openai/gpt-5.4",
prompt: "...",
})
Structured output
Pass a Zod schema, get a typed object back:
import { generateObject } from "ai"
import { z } from "zod"
const Schema = z.object({
title: z.string(),
bullets: z.array(z.string()),
sentiment: z.enum(["positive", "neutral", "negative"]),
})
const { object } = await ai.generateObject({
model: "openai/gpt-5.4",
schema: Schema,
prompt: "Summarize this support ticket: ...",
})
object.title // string
object.bullets // string[]
object.sentiment // 'positive' | 'neutral' | 'negative'
The AI SDK validates with Zod after generation. If validation fails, it retries with the parser error fed back to the model. See Structured output for the full guide.
Streaming structured output
import { streamObject } from "ai"
const result = ai.streamObject({
model: "openai/gpt-5.4",
schema: Schema,
prompt: "...",
})
for await (const partial of result.partialObjectStream) {
// partial: Partial<z.infer<typeof Schema>> — fields fill in as they arrive
renderProgress(partial)
}
const final = await result.object // fully validated
Tool use
import { generateText, tool } from "ai"
import { z } from "zod"
const result = await ai.generateText({
model: "openai/gpt-5.4",
prompt: "What's the weather in Tokyo?",
tools: {
getWeather: tool({
description: "Get current weather for a city",
parameters: z.object({ city: z.string() }),
execute: async ({ city }) => fetchWeather(city),
}),
},
maxSteps: 3, // multi-step orchestration
})
console.log(result.text)
See Tool use for parallel calls, forced choices, and reasoning-model patterns.
Embeddings
import { embed, embedMany } from "ai"
const { embedding } = await ai.embed({
model: "openai/text-embedding-3-large",
value: "The quick brown fox",
})
const { embeddings } = await ai.embedMany({
model: "openai/text-embedding-3-large",
values: chunks,
})
See Embeddings for the full RAG pattern.
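Once you have embeddings, retrieval is just vector math. The AI SDK exports a `cosineSimilarity` helper from `ai`; the version below is a self-contained sketch so the math is visible, plus a hypothetical `topK` ranking helper:

```typescript
// Cosine similarity: dot product divided by the product of the magnitudes.
// 1 = same direction, 0 = orthogonal, -1 = opposite.
// (The AI SDK exports an equivalent `cosineSimilarity` from "ai".)
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vectors must have the same length")
  let dot = 0, magA = 0, magB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    magA += a[i] * a[i]
    magB += b[i] * b[i]
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB))
}

// Rank stored chunks against a query embedding, most similar first.
function topK(
  query: number[],
  corpus: { text: string; embedding: number[] }[],
  k = 3,
) {
  return corpus
    .map((c) => ({ text: c.text, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
}
```

Pair embedMany over your document chunks with topK over a query's embedding and you have the core of a retrieval step.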
Image generation
import { generateText, experimental_generateImage as generateImage } from "ai"
// Nano Banana family — uses generateText
const r1 = await ai.generateText({
model: "google/gemini-3-pro-image",
prompt: "A red panda eating bamboo, painterly style.",
})
const image = r1.files.find((f) => f.mediaType?.startsWith("image/"))
// Image-only models — uses experimental_generateImage
const r2 = await generateImage({
model: "bfl/flux-2-flex",
baseURL: "https://synapse.garden/api/v1",
apiKey: process.env.MG_KEY,
prompt: "A vibrant coral reef.",
aspectRatio: "16:9",
})
const buf = Buffer.from(r2.images[0].base64, "base64")
See Image generation for the full split.
Video generation
import fs from "node:fs"
import { experimental_generateVideo as generateVideo } from "ai"
const result = await generateVideo({
model: "google/veo-3.1-generate-001",
baseURL: "https://synapse.garden/api/v1",
apiKey: process.env.MG_KEY,
prompt: "A serene mountain landscape at sunset.",
duration: 8,
aspectRatio: "16:9",
})
fs.writeFileSync("output.mp4", result.videos[0].uint8Array)
See Video generation.
Provider routing
providerOptions.gateway.* controls which providers serve a request:
ai.generateText({
model: "anthropic/claude-opus-4.6",
prompt: "...",
providerOptions: {
gateway: {
order: ["bedrock", "anthropic"], // try Bedrock first
sort: "cost", // among the rest, cheapest first
models: ["openai/gpt-5.4"], // fallback to gpt-5.4 if Claude fails
},
},
})
See Provider routing.
Reasoning options
ai.generateText({
model: "openai/gpt-5.5",
prompt: "...",
providerOptions: {
openai: {
reasoningEffort: "high",
reasoningSummary: "auto",
},
},
})
See Reasoning.
Caching
ai.generateText({
model: "anthropic/claude-sonnet-4.6",
system: largeSharedPrompt,
prompt: userQuery,
providerOptions: {
gateway: { caching: "auto" },
},
})
See Caching.
Multi-modal messages
ai.generateText({
model: "openai/gpt-5.4",
messages: [
{
role: "user",
content: [
{ type: "text", text: "What's in this image?" },
{ type: "image", image: "https://example.com/photo.jpg" },
],
},
],
})
See Vision input.
Idempotency
The AI SDK accepts a headers map you can use to attach Idempotency-Key:
ai.generateText({
model: "...",
prompt: "...",
headers: {
"Idempotency-Key": `req_${Date.now()}_${crypto.randomUUID()}`,
},
})
See Errors & retries.
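Note that an idempotency key only pays off if it stays stable across retries of the same logical request; generating a fresh key per attempt defeats the point. A self-contained sketch of a retry wrapper that generates the key once (the `withIdempotentRetries` name and backoff numbers are illustrative, not part of the SDK):

```typescript
import { randomUUID } from "node:crypto"

// Generate the idempotency key once per logical request and reuse it on
// every retry, so an attempt that actually succeeded server-side is
// deduplicated rather than billed again. `fn` is any request function that
// forwards the key as the Idempotency-Key header.
async function withIdempotentRetries<T>(
  fn: (idempotencyKey: string) => Promise<T>,
  maxRetries = 3,
): Promise<T> {
  const key = `req_${randomUUID()}` // stable across all attempts
  let lastError: unknown
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn(key)
    } catch (err) {
      lastError = err
      if (attempt < maxRetries) {
        // Exponential backoff before the next attempt.
        await new Promise((r) => setTimeout(r, 2 ** attempt * 100))
      }
    }
  }
  throw lastError
}
```

Usage would look like `withIdempotentRetries((key) => ai.generateText({ model, prompt, headers: { "Idempotency-Key": key } }))`.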
Why we recommend it
- Same code, every model. Provider differences are abstracted. Tool definitions, structured output, vision, caching — they all use a single shape that works everywhere.
- Type-safe. Zod schemas + TypeScript inference. Wrong types are caught at compile time, not as runtime parser errors.
- Streaming first. streamText and streamObject give you token-level control with proper backpressure and cancellation.
- First-class tool use. Cleaner API than the OpenAI / Anthropic SDKs — Zod schemas instead of hand-rolled JSON, automatic execution loop, maxSteps cap.
- Active development. New AI SDK features (workflow primitives, MCP support, agent patterns) ship monthly. Synapse Garden is built to support every release on the day it lands.
Common patterns
// Server action returning a stream to a React component
"use server"
import { streamText } from "ai"
import { createStreamableValue } from "ai/rsc"
export async function generate(prompt: string) {
const stream = createStreamableValue("")
;(async () => {
const { textStream } = streamText({
model: "openai/gpt-5.4",
baseURL: "https://synapse.garden/api/v1",
apiKey: process.env.MG_KEY,
prompt,
})
for await (const part of textStream) stream.update(part)
stream.done()
})()
return { output: stream.value }
}
// API route streaming back to a fetch caller
// app/api/ai/route.ts
import { streamText } from "ai"
export async function POST(req: Request) {
const { prompt } = await req.json()
const result = streamText({
model: "openai/gpt-5.4",
baseURL: "https://synapse.garden/api/v1",
apiKey: process.env.MG_KEY,
prompt,
})
return result.toTextStreamResponse()
}
// Multi-model fallback chain
ai.generateText({
model: "openai/gpt-5.4",
prompt: "...",
providerOptions: {
gateway: {
models: ["anthropic/claude-opus-4.6", "google/gemini-3.1-pro-preview"],
},
},
})
Resources
- AI SDK docs — the canonical reference
- Cookbook examples
- GitHub repo
- Synapse Garden-specific guides: Streaming · Tool use · Structured output · Embeddings