API Reference
POST /v1/images/generations
OpenAI-compatible image generation: the same request and response shape as the OpenAI Images API, with every supported provider behind one endpoint.
FIG. 00 · POST /v1/images/generations · prompt → pixels
/v1/images/generations is OpenAI-compatible — same path, same body shape, same response. Pass any image model in the catalog (openai/dall-e-3, recraft/recraft-v3, bytedance/seedream-4.0, …) by provider/model-id and you get a unified response. Use the AI SDK's generateImage for a typed surface, or call fetch directly with the schema below.
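If you go the plain-fetch route, a minimal sketch looks like the following. The URL, header names, and body fields mirror the curl example and body schema below; `buildRequest` is a hypothetical helper, and `MG_KEY` is assumed to be exported in your environment.

```typescript
// Build the fetch options for a generation request. Only the request
// construction is shown; the commented-out call at the bottom is the
// actual network round trip.
interface GenerationRequest {
  model: string;
  prompt: string;
  n?: number;
  size?: string;
  quality?: string;
  response_format?: "url" | "b64_json";
}

function buildRequest(body: GenerationRequest) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MG_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  };
}

// Usage (not executed here):
// const res = await fetch(
//   "https://synapse.garden/api/v1/images/generations",
//   buildRequest({ model: "openai/dall-e-3", prompt: "a koi pond blueprint" }),
// );
// const { data } = await res.json();
```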
FIG. 01 · Sync or job
Request
```shell
curl https://synapse.garden/api/v1/images/generations \
  -H "Authorization: Bearer $MG_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/dall-e-3",
    "prompt": "A schematic blueprint of a koi pond, hairline strokes, washi paper",
    "n": 1,
    "size": "1024x1024",
    "quality": "hd",
    "response_format": "b64_json"
  }'
```

Body schema
| Field | Type | Required | Notes |
|---|---|---|---|
| `model` | string | yes | `provider/model-id`. |
| `prompt` | string | yes | 1–8 000 chars. |
| `n` | integer | no | 1–8. Defaults to 1. Some models only support `n: 1`. |
| `size` | string | no | `WIDTHxHEIGHT` (e.g. `1024x1024`, `1792x1024`). Provider-validated. |
| `quality` | enum | no | `standard`, `hd`, `low`, `medium`, `high` (provider-specific values pass through). |
| `style` | enum | no | `vivid` or `natural`. OpenAI-style hint; ignored by other providers. |
| `response_format` | enum | no | `url` or `b64_json`. Defaults to `b64_json`, so you don't need any external storage. |
| `user` | string | no | Caller-defined identifier. Max 256 chars. |
| `reference_image` | string | no | Base64 or URL, for edit / image-to-image flows on supported models. |
| `providerOptions` | object | no | Provider-namespaced overrides (`{ openai: { background: "transparent" } }`). |
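The constraints above can be checked client-side before spending a request. A sketch of such a pre-check, mirroring the schema's documented ranges (the `validateBody` helper is hypothetical, not part of any SDK):

```typescript
// Client-side pre-validation for the three fields with documented
// constraints: prompt length, n range, and size format. Returns a list
// of human-readable problems; empty means the body passes these checks.
function validateBody(body: { prompt: string; n?: number; size?: string }): string[] {
  const errors: string[] = [];
  if (body.prompt.length < 1 || body.prompt.length > 8000) {
    errors.push("prompt must be 1-8000 chars");
  }
  const n = body.n ?? 1; // defaults to 1 per the schema
  if (!Number.isInteger(n) || n < 1 || n > 8) {
    errors.push("n must be an integer in 1-8");
  }
  if (body.size !== undefined && !/^\d+x\d+$/.test(body.size)) {
    errors.push("size must be WIDTHxHEIGHT, e.g. 1024x1024");
  }
  return errors;
}
```

Note this only catches shape errors; per-model limits (supported sizes, `n: 1` caps) are still validated by the provider and surface as 400 `BAD_REQUEST`.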
Headers
Same surface as every /v1/* endpoint — see Authentication for the full list. The two you almost always want:
- `Authorization: Bearer mg_live_*`
- `x-mg-idempotency-key: <ulid>`. Image generation is non-deterministic; idempotency keys let safe retries return the original image instead of charging twice.
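In practice that means minting one key per logical generation and reusing it on every retry. A sketch (the `idempotentHeaders` helper is hypothetical; the docs suggest a ULID, and this sketch assumes any per-request-unique string behaves the same, using a UUID for brevity):

```typescript
import { randomUUID } from "node:crypto";

// Build the two headers you almost always want. The idempotency key is
// generated once and must be reused verbatim across retries of the same
// logical request.
function idempotentHeaders(apiKey: string, idempotencyKey: string = randomUUID()) {
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
    "x-mg-idempotency-key": idempotencyKey,
  };
}
```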
Response
```json
{
  "id": "img_01J9Z...",
  "created": 1778430000,
  "model": "openai/dall-e-3",
  "data": [
    {
      "b64_json": "iVBORw0KGgoAAAANSUhEUgAA...",
      "revised_prompt": "A schematic blueprint of a koi pond..."
    }
  ],
  "usage": { "input_tokens": 28, "output_images": 1 }
}
```

| Field | Type | Notes |
|---|---|---|
| `data[].b64_json` | string | Present when `response_format: b64_json` (default). |
| `data[].url` | string | Present when `response_format: url`. Short-lived (typically 1 h). |
| `data[].revised_prompt` | string | OpenAI-style auto-rewritten prompt (if the upstream supports it). |
| `usage.input_tokens` | integer | Tokens consumed by the prompt. |
| `usage.output_images` | integer | Images returned (matches `data.length`). |
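Since `b64_json` is the default, the payloads usually need decoding before use. A sketch assuming the response shape above (the `decodeImages` helper is hypothetical):

```typescript
// Decode every b64_json payload in a response into raw image bytes.
// Entries without b64_json (e.g. url-mode responses) are skipped.
function decodeImages(resp: { data: { b64_json?: string }[] }): Buffer[] {
  return resp.data
    .filter((item) => typeof item.b64_json === "string")
    .map((item) => Buffer.from(item.b64_json as string, "base64"));
}

// e.g. fs.writeFileSync("out-0.png", decodeImages(resp)[0]);
```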
Errors
Standard /v1/* envelope. Common codes:
| Status | error.code | When |
|---|---|---|
| 400 | BAD_REQUEST | Invalid size format, n out of range, etc. |
| 402 | BUDGET_EXCEEDED | Project spend cap reached — image generation can be expensive. |
| 403 | MODEL_NOT_ALLOWED | Model not on the project's allowlist, or capability mismatch. |
| 429 | RATE_LIMITED | Per-key RPM exceeded. |
| 504 | UPSTREAM_TIMEOUT | Provider didn't return inside 5 min. Retry with the same idempotency key. |
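Of these, 429 and 504 are the retryable ones. A sketch of a backoff loop (the `withRetries` helper is hypothetical; it assumes the caller sends the same `x-mg-idempotency-key` on every attempt, so a retry that lands after a slow upstream success returns the original image rather than billing a second generation):

```typescript
// Retry a request on RATE_LIMITED (429) or server-side failures (5xx,
// including UPSTREAM_TIMEOUT 504) with exponential backoff. Any other
// status is treated as final.
async function withRetries<T>(
  attempt: () => Promise<{ status: number; value?: T }>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let i = 0; i < maxAttempts; i++) {
    const res = await attempt();
    if (res.status !== 429 && res.status < 500) {
      if (res.value !== undefined) return res.value;
      throw new Error(`non-retryable status ${res.status}`);
    }
    // Exponential backoff: 1x, 2x, 4x the base delay.
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
  }
  throw new Error("exhausted retries");
}
```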
Limits
- Prompt: 8 000 characters max.
- `n` ≤ 8; many models cap at 1.
- Connection held up to 300 s. For longer pipelines, set an `x-mg-idempotency-key` and retry; duplicate writes are deduped against the ledger.