Models & providers

Browse 100+ models — language, vision, audio, embedding, image, video. Pricing is honest and synced nightly.

FIG. 00 · Model catalog
FIG. 01 · Model routing schematic
Every request resolves a `creator/slug` model id against the live catalog, picks an upstream provider, and dispatches. Per-project allowlists narrow which slugs your key can call.
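That resolution step can be sketched as a pure function. This is a minimal illustration with a hypothetical in-memory catalog and allowlist shape (the names `CatalogEntry`, `resolve`, and the `providers` field are ours, not the gateway's internals):

```typescript
type CatalogEntry = { id: string; providers: string[] }

// Hypothetical resolver: look up a creator/slug id, enforce the
// per-project allowlist, then pick the first upstream provider.
function resolve(
  id: string,
  catalog: CatalogEntry[],
  allowlist: string[] | null, // null = no per-project restriction
): string {
  if (allowlist && !allowlist.includes(id)) {
    throw new Error(`model ${id} is not on this project's allowlist`)
  }
  const entry = catalog.find((m) => m.id === id)
  if (!entry) throw new Error(`unknown model id: ${id}`)
  return entry.providers[0]
}

const catalog: CatalogEntry[] = [
  { id: "openai/gpt-5.4", providers: ["openai"] },
]
resolve("openai/gpt-5.4", catalog, ["openai/gpt-5.4"]) // → "openai"
```

An empty allowlist rejects every slug, which is why narrowing a project's allowlist narrows what its key can call.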

How to specify a model

Always use the creator/model-slug format:

model: "openai/gpt-5.4"
model: "anthropic/claude-opus-4.6"
model: "google/gemini-3.1-pro-preview"
model: "meta/llama-4-405b"
model: "bfl/flux-2-flex"            // image
model: "google/veo-3.1-generate-001" // video

A model id maps 1:1 to its (creator, slug) pair. The full live catalog is at /models — filter by modality, search by capability. With the AI SDK, pass the same id to streamText and swap models without touching the rest of the call.
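Because ids are always `creator/slug`, splitting on the first `/` recovers the pair. A small helper (ours, not part of any SDK):

```typescript
// Split a model id into its (creator, slug) pair.
// Splitting on the first "/" keeps slugs intact.
function parseModelId(id: string): { creator: string; slug: string } {
  const i = id.indexOf("/")
  if (i === -1) throw new Error(`expected creator/slug, got: ${id}`)
  return { creator: id.slice(0, i), slug: id.slice(i + 1) }
}

parseModelId("anthropic/claude-opus-4.6")
// → { creator: "anthropic", slug: "claude-opus-4.6" }
```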

Modalities

Every model has one or more modalities. Filter or branch on them:

| Modality | What it means | Example |
| --- | --- | --- |
| Text | Plain text in / text out | openai/gpt-5.4 |
| Vision | Accepts image input alongside text | openai/gpt-5.4, google/gemini-3.1-pro-preview |
| Audio | Accepts audio input | google/gemini-2.5-pro (multimodal) |
| Embedding | Returns vectors | openai/text-embedding-3-large |
| Reranking | Scores docs against a query | cohere/rerank-english-v3.0 |
| Image generation | Generates images from prompts | bfl/flux-2-flex, google/imagen-4.0-generate-001 |
| Video generation | Generates video from prompts | google/veo-3.1-generate-001, klingai/kling-v2.6-i2v |

Multimodal models can do several at once.

For example, openai/gpt-5.4 is text + vision. google/gemini-3-pro-image is text + vision + image generation. The catalog page lists every supported modality per model.
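Branching on modalities is then a list filter. A sketch over a hand-made sample (the `modalities` field name here is our assumption for illustration; check the actual catalog payload for the real schema):

```typescript
type Model = { id: string; modalities: string[] }

// Illustrative sample, not live catalog data.
const sample: Model[] = [
  { id: "openai/gpt-5.4", modalities: ["text", "vision"] },
  { id: "openai/text-embedding-3-large", modalities: ["embedding"] },
  { id: "bfl/flux-2-flex", modalities: ["image-generation"] },
]

// Models that accept image input:
const vision = sample.filter((m) => m.modalities.includes("vision"))
// keeps only openai/gpt-5.4 from this sample
```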

Catalog freshness

The catalog is synced nightly from the routing layer. New models appear within 24 hours of upstream availability. Pricing is the live rate — the number you see on /models is the number you pay.

const res = await fetch("https://synapse.garden/api/v1/models", {
  headers: { Authorization: `Bearer ${process.env.MG_KEY}` },
})
const { data: models } = await res.json()

// Filter by type
const text = models.filter((m) => m.type === "language")
const image = models.filter((m) => m.type === "image")
const video = models.filter((m) => m.type === "video")

To inspect a single model's endpoints and current pricing:

const res = await fetch(
  "https://synapse.garden/api/v1/models/openai/gpt-5.4/endpoints",
  { headers: { Authorization: `Bearer ${process.env.MG_KEY}` } },
)
const { data } = await res.json()

console.log(data.architecture.input_modalities)  // ["text", "image"]
console.log(data.endpoints[0].pricing.prompt)    // "0.0000025"

Try a model

Switch between models to compare their personality, speed, and price. The playground uses a docs-only sandbox key.

The sandbox is rate-limited to 5 requests/min per IP.

Pricing math

Prices on the catalog page are the list price you pay. Our flat margin is already baked in, so no separate "markup" line appears on your bill.

For the full math (passthrough + flat 10% DX premium), see /legal/pricing-disclosure.
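Under that formula, the list price is the upstream passthrough rate times 1.10. A quick sanity check (the rates here are illustrative, not real quotes):

```typescript
// list = passthrough * 1.10 (flat 10% DX premium)
function listPrice(passthroughPerToken: number): number {
  return passthroughPerToken * 1.1
}

// e.g. an upstream prompt rate of $2.50 per 1M tokens
const perToken = 2.5 / 1_000_000
listPrice(perToken) * 1_000_000 // ≈ 2.75 ($ per 1M tokens, list)
```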

Choosing a model

Quick heuristics:

  • Frontier reasoning: openai/gpt-5.5-pro, anthropic/claude-opus-4.6 with extended thinking
  • Production workhorse: openai/gpt-5.4, anthropic/claude-sonnet-4.6
  • High-volume cheap: openai/gpt-5.4-mini, google/gemini-2.5-flash
  • Edge / latency-sensitive: openai/gpt-5.4-nano, cerebras/*
  • Long context: google/gemini-3.1-pro-preview (1M+), anthropic/claude-opus-4.6 (1M variant)
  • Image generation: google/gemini-3-pro-image, bfl/flux-2-flex
  • Video generation: google/veo-3.1-generate-001, klingai/kling-v2.6-i2v

When in doubt, run the playground above with the same prompt across two or three models and compare.
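If you want those heuristics in code, a tiny default map works; the mapping below just mirrors the bullets above and is a starting point to tune, not a recommendation baked into any SDK:

```typescript
type UseCase = "frontier" | "workhorse" | "cheap" | "edge" | "long-context"

// Default model per use case, following the heuristics above.
const defaults: Record<UseCase, string> = {
  frontier: "openai/gpt-5.5-pro",
  workhorse: "openai/gpt-5.4",
  cheap: "google/gemini-2.5-flash",
  edge: "openai/gpt-5.4-nano",
  "long-context": "google/gemini-3.1-pro-preview",
}

function pickModel(useCase: UseCase): string {
  return defaults[useCase]
}

pickModel("workhorse") // → "openai/gpt-5.4"
```

Centralizing the choice in one map means a model swap is a one-line diff instead of a grep across your codebase.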