
Migration flow · Cursor

Two-line repoint from `api.openai.com` to your Synapse Garden deployment, driven by Cursor's agent over MCP. Tools, JSON mode, vision, and embeddings keep their exact shape.

FIG. 00 · MIGRATION FLOW · OpenAI · 2-LINE SWAP

This flow takes a Cursor user from `import OpenAI from "openai"` to a working Synapse Garden integration — including a scoped key, a budget cap, and a post-deploy error tail — without leaving the editor. The wire shape stays OpenAI-compatible, so existing code that uses tools, JSON mode, vision, or embeddings keeps working unchanged. The one library you'll probably also want is the AI SDK's `createOpenAICompatible` for the next file you write; for the migration itself you don't change SDKs, you change two strings.

For background on the two-line swap and the gotchas (model-name prefix, ignored OpenAI-Organization header), keep Migrate from OpenAI direct open in another tab.

FIG. 01 · CUTOVER SCHEMATIC
Cursor's agent reads the file, calls `migration_plan` against the snippet, applies the diff, mints a scoped key behind a confirmation, sets a project budget, deploys, then tails errors for the first hour. The OpenAI key stays live in parallel for one clean week before you decommission it.
01

Wire the MCP server into Cursor

Add the `synapse-garden` server to your project's `.mcp.json`. Cursor reads this on the next agent invocation:

// .mcp.json
{
  "mcpServers": {
    "synapse-garden": {
      "command": "npx",
      "args": ["-y", "@synapse-garden/mcp"],
      "env": { "MG_PAT": "${MG_PAT}" }
    }
  }
}

Cursor expands ${MG_PAT} from your shell. Don't paste cleartext into the file — it ends up in version control faster than you think.
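One way to enforce that rule mechanically is a pre-commit check that rejects a `.mcp.json` whose `env` block carries a literal token instead of a `${VAR}` placeholder. A minimal sketch — the function name is illustrative, and the token format follows the `mg_pat_` example shown later in this flow:

```typescript
// Returns true when every env value is a shell-style ${VAR} placeholder,
// false when a literal secret has been pasted into the file.
function envLooksSafe(mcpJson: string): boolean {
  const config = JSON.parse(mcpJson);
  for (const server of Object.values<any>(config.mcpServers ?? {})) {
    for (const value of Object.values<string>(server.env ?? {})) {
      if (!/^\$\{[A-Z_][A-Z0-9_]*\}$/.test(value)) return false;
    }
  }
  return true;
}

// A placeholder passes; a pasted cleartext token does not.
const safe = `{"mcpServers":{"synapse-garden":{"command":"npx","env":{"MG_PAT":"\${MG_PAT}"}}}}`;
const leaked = `{"mcpServers":{"synapse-garden":{"command":"npx","env":{"MG_PAT":"mg_pat_4d2e91c7f3ab68095cbe1d37"}}}}`;
```

Wire it into whatever hook runner the repo already uses; the point is only that the check runs before the file can land in version control.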

02

Mint a migration-scoped PAT

Migrations need three things: read the catalog, mint a key for the new project, write a budget. Open /app/linear-prod/agent-tokens, create a token named cursor-migration, and grant exactly:

  • models:read
  • keys:write
  • governance:write
  • migration:write

Skip `logs:read` — `migration_plan` is a static analyzer; it doesn't read your traffic. Export the cleartext:

export MG_PAT=mg_pat_4d2e91c7f3ab68095cbe1d37

Tokens issued for migration flows expire 14 days after the last use. Rotate it from the same page once the cutover is clean.
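The four-scope grant above can be checked mechanically before the agent runs. A sketch, assuming exactly the scopes named in this step:

```typescript
// The scopes a migration-flow token needs, per the grant list above.
const REQUIRED_MIGRATION_SCOPES = [
  "models:read",      // read the catalog
  "keys:write",       // mint the proxy key
  "governance:write", // write the budget
  "migration:write",  // run migration_plan
] as const;

// Returns the scopes a token is missing; an empty array means go.
function missingScopes(granted: string[]): string[] {
  return REQUIRED_MIGRATION_SCOPES.filter((s) => !granted.includes(s));
}
```

Running it against the token's granted-scope list before the first tool call turns a mid-flow permission error into a one-line preflight message.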

03

Plan the diff

Open the file you want to migrate — say lib/ai/summarize.ts — in Cursor and prompt the agent: "Migrate this off OpenAI direct using Synapse Garden." The agent calls migration_plan with the source attached:

{
  "type": "tool_use",
  "name": "migration_plan",
  "input": {
    "source_kind": "openai-node",
    "language": "typescript",
    "snippet": "import OpenAI from 'openai'\n\nconst client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })\n\nexport async function summarize(input: string) {\n  const res = await client.chat.completions.create({\n    model: 'gpt-4o-mini',\n    messages: [{ role: 'user', content: `Summarize: ${input}` }]\n  })\n  return res.choices[0].message.content\n}\n",
    "target_project": "web-server"
  }
}

The response carries the diff, the env var changes, and the gotchas the static analyzer caught for this file:

{
  "diff": [
    { "file": "lib/ai/summarize.ts", "line": 3, "before": "const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })", "after": "const client = new OpenAI({ apiKey: process.env.MG_KEY, baseURL: 'https://synapse.garden/api/v1' })" },
    { "file": "lib/ai/summarize.ts", "line": 7, "before": "model: 'gpt-4o-mini'", "after": "model: 'openai/gpt-4o-mini'" }
  ],
  "env_changes": {
    "add": ["MG_KEY"],
    "keep_for_now": ["OPENAI_API_KEY"]
  },
  "notes": [
    "Model id needs the 'openai/' prefix on our endpoint — every model is namespaced by creator.",
    "Your code does not set 'OpenAI-Organization' or 'OpenAI-Project' headers, so nothing to remove.",
    "Streaming is unchanged: `stream: true` flows through and the SSE shape is byte-identical."
  ],
  "estimated_files_touched": 1,
  "review_recommended": false
}

The agent reads notes, surfaces them inline, and waits for your nod before editing.
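Mechanically, each diff entry is an exact-match line replacement. A minimal sketch of how an editor could apply one — the `DiffEntry` shape mirrors the response above; the stale-plan guard is an assumption about how a careful client would behave:

```typescript
interface DiffEntry {
  file: string;
  line: number; // 1-indexed, as in the migration_plan response
  before: string;
  after: string;
}

// Applies entries to a file's text, refusing to touch a line that no
// longer matches `before` — i.e. the file drifted since the plan was made.
function applyDiff(source: string, entries: DiffEntry[]): string {
  const lines = source.split("\n");
  for (const e of entries) {
    if (lines[e.line - 1]?.trim() !== e.before.trim()) {
      throw new Error(`stale plan at ${e.file}:${e.line}`);
    }
    // Replace only the matched text so indentation survives.
    lines[e.line - 1] = lines[e.line - 1].replace(e.before.trim(), e.after.trim());
  }
  return lines.join("\n");
}
```

The exact-match check is what makes "waits for your nod" safe: if you edit the file between planning and approval, the apply fails loudly instead of patching the wrong line.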

04

Apply the diff

Approve the change in Cursor; the editor applies the two-line patch. The resulting file:

import OpenAI from "openai"

const client = new OpenAI({
  apiKey: process.env.MG_KEY,
  baseURL: "https://synapse.garden/api/v1",
})

export async function summarize(input: string) {
  const res = await client.chat.completions.create({
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: `Summarize: ${input}` }],
  })
  return res.choices[0].message.content
}

No SDK swap, no rewriting of message shapes. Tools, response_format: { type: "json_object" }, and vision parts all keep working — they're OpenAI-compatible on the wire.
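One way to convince yourself nothing else changes: build the request body and look at it. The only field that differs from the OpenAI-direct call is `model`. A sketch — the `namespaceModel` helper is illustrative, not part of any SDK:

```typescript
// Rewrites a bare OpenAI model id to its namespaced form; ids that
// already carry a creator prefix pass through untouched.
function namespaceModel(model: string): string {
  return model.includes("/") ? model : `openai/${model}`;
}

// A JSON-mode request body — identical in shape to the OpenAI-direct
// version apart from the model id.
const body = {
  model: namespaceModel("gpt-4o-mini"),
  response_format: { type: "json_object" as const },
  messages: [{ role: "user" as const, content: "Summarize this as JSON: hello" }],
};
```

Making the prefixing idempotent (the `includes("/")` check) means the same helper is safe to run over a file that was already migrated.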

05

Mint the proxy key

The agent calls create_api_key for the new project. The first call omits confirm: true, so the server returns a summary that Cursor renders as a card; once you approve, the agent re-issues the call with confirm: true:

{
  "type": "tool_use",
  "name": "create_api_key",
  "input": {
    "project": { "slug": "web-server", "create_if_missing": true },
    "name": "web-server · summarize migration",
    "scopes": ["chat"],
    "environment": "live",
    "confirm": true
  }
}

Response (cleartext shown once):

{
  "key": {
    "id": "key_01J0F8C4VR2N7K5PQE",
    "prefix": "mg_live_",
    "cleartext": "mg_live_a629c3f01b78d4528ee6ff90c7314a5d",
    "project": "web-server",
    "scopes": ["chat"],
    "rotation_recommended_at": "2026-08-08T00:00:00Z"
  }
}

The agent appends MG_KEY=mg_live_… to .env.local, leaves OPENAI_API_KEY in place for the parallel-run week, and reminds you to add MG_KEY to your deployment secrets.
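The append itself is mundane but worth getting right: `MG_KEY` added or updated, every other line — including the still-live `OPENAI_API_KEY` — untouched. A sketch, assuming a plain `KEY=value` env file:

```typescript
// Adds or replaces one KEY=value line in a .env-style file body
// without disturbing any other line.
function upsertEnvVar(envFile: string, key: string, value: string): string {
  const entry = `${key}=${value}`;
  const lines = envFile.split("\n").filter((l) => l.length > 0);
  const idx = lines.findIndex((l) => l.startsWith(`${key}=`));
  if (idx >= 0) lines[idx] = entry; // key exists: replace in place
  else lines.push(entry);           // key absent: append
  return lines.join("\n") + "\n";
}
```

An upsert rather than a blind append also covers the re-run case: minting a second key during the same migration replaces the old `MG_KEY` line instead of leaving two.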

06

Set a project budget

Without a cap, a runaway prompt loop on day one is a five-figure surprise on day two. The agent estimates the safe ceiling from your current OpenAI bill (you tell it the number, or it pulls from analyze_spend if you've been running both keys for a few hours) and proposes:

{
  "type": "tool_use",
  "name": "set_project_budget",
  "input": {
    "project": "web-server",
    "monthly_cap_usd": 320,
    "soft_alert_pct": 72,
    "hard_cutoff_pct": 105
  }
}

The first call returns confirmation_required with this summary:

Set monthly cap on web-server to $320. Soft alert at 72% ($230.40) emails the project owner. Hard cutoff at 105% ($336) starts returning BUDGET_EXCEEDED on chat. No effect on existing requests in flight.

Approve, the agent re-issues with confirm: true, and the snapshot lands. The 105% hard cutoff is intentional — it lets a near-miss day finish cleanly and rolls into the next period the next morning.
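The dollar figures in the confirmation card are plain percentages of the cap — worth checking by hand once. A sketch of the arithmetic, using the numbers from this step:

```typescript
// Converts set_project_budget inputs into the dollar thresholds the
// confirmation card displays, rounded to cents.
function budgetThresholds(capUsd: number, softPct: number, hardPct: number) {
  const round = (n: number) => Math.round(n * 100) / 100;
  return {
    softAlertUsd: round((capUsd * softPct) / 100),  // 320 at 72% → 230.40
    hardCutoffUsd: round((capUsd * hardPct) / 100), // 320 at 105% → 336.00
  };
}
```

The hard cutoff sitting above 100% is the whole design: the soft alert fires well before the cap, and the cutoff only bites once the period has genuinely overrun.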

07

Deploy and tail

Push the branch, deploy the preview, send real traffic. Five minutes after the first ten requests land, the agent runs:

{
  "type": "tool_use",
  "name": "tail_errors",
  "input": {
    "project": "web-server",
    "since": "10m",
    "limit": 50,
    "include_4xx": true
  }
}

A typical clean migration looks like:

{
  "errors": [],
  "checked": { "requests": 47, "window_seconds": 612 },
  "throughput_ok": true,
  "p95_latency_ms": 1843
}

If something does fail, the agent reads the error code (`MODEL_NOT_FOUND`, `INVALID_ARGUMENT`, `RATE_LIMITED`) and proposes a one-line fix. A common one: someone forgot to prefix a model id elsewhere in the codebase — `migration_plan` only saw the file you opened.
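That last failure mode — an un-prefixed model id in a file the plan never saw — is easy to sweep for before deploying. A rough sketch; the regex only covers the quoted `model:` property style this flow deals with, so treat it as a first pass, not a guarantee:

```typescript
// Flags model: '<id>' occurrences that lack a creator prefix, so a
// follow-up migration_plan call can be pointed at the offending files.
function findUnprefixedModels(source: string): string[] {
  const hits: string[] = [];
  const re = /model:\s*["']([^"']+)["']/g;
  for (const m of source.matchAll(re)) {
    if (!m[1].includes("/")) hits.push(m[1]);
  }
  return hits;
}
```

Run it over the repo (or just the `lib/ai` tree) and feed each hit's file back through step 03.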

08

Decommission the OpenAI key

After seven days of clean traffic on the new key (no error spikes, no latency regressions vs. the OpenAI baseline), revoke OPENAI_API_KEY at OpenAI's dashboard and remove it from your deployment secrets. Keep the two cards from this PR pinned in Cursor as a runbook for the next file.

If you want a structured cutover checklist instead of the parallel-run window, the Migrate from OpenAI direct page documents the validation matrix per capability (tools, JSON mode, vision, embeddings, structured streaming).

What this bought you

  • The model id prefix and the baseURL are the only code changes. Everything else is governance the agent set up on your behalf — a scoped key, a project, a budget, an audit trail. See the MCP server tool surface for the full set of write tools the migration prompt can compose.
  • The OpenAI key stayed live during the cutover. There was no big-bang switch; you ran both for a week and decommissioned only after the error tail stayed flat.
  • Every mutation the agent ran is in /app/linear-prod/audit with the tool name, the args hash, and the Cursor client identifier. If a teammate asks "who set the $320 cap on web-server?", the answer is one row away.
