Why we built Synapse Garden
One API key, every model. The short version of how the proxy started, what it's for, and what we won't add to it.
- company
- philosophy
Most teams don't need a new LLM. They need the LLMs they already use to behave like one product instead of seven separate billing relationships, seven sets of keys, seven dashboards, and seven different ways to find out a request failed.
Synapse Garden is a proxy. You point your existing OpenAI or Anthropic SDK at our base URL, swap your provider key for an mg_live_* key, and keep shipping. That's the whole interface.
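Because the proxy speaks the providers' own wire format, the request you send is the one your SDK already builds; only the base URL and the key change. A minimal sketch of that request shape, using a hypothetical base URL and key (check the real docs for the actual endpoint):

```python
import json

# Hypothetical values for illustration -- substitute your real
# Synapse Garden base URL and mg_live_* key.
SYNAPSE_BASE_URL = "https://api.synapse-garden.example/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> dict:
    """Assemble the same OpenAI-compatible chat request an SDK would
    send; pointing an SDK at the proxy changes nothing else."""
    return {
        "url": f"{SYNAPSE_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("mg_live_abc123", "gpt-4o-mini", "hello")
print(req["url"])  # .../chat/completions
```

With the official OpenAI SDK the equivalent is passing `base_url` and `api_key` to the client constructor; nothing else in your call sites moves.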
What you get back is the boring infrastructure most teams build twice and regret both times: per-project keys, hard spend caps that actually work, atomic budget deduction, audit trails, model allowlists, retries, and one bill at the end of the month that ties to one set of usage numbers.
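"Atomic budget deduction" above is the property that two concurrent requests can never both pass a cap check and both spend the same remaining budget. A toy in-process sketch of the idea (a real proxy would enforce this in its datastore, not in application memory):

```python
import threading

class Budget:
    """Toy model of a hard spend cap: deduct only if funds remain,
    under one lock, so concurrent requests cannot overspend."""

    def __init__(self, cap_cents: int):
        self._remaining = cap_cents
        self._lock = threading.Lock()

    def try_deduct(self, cost_cents: int) -> bool:
        # Check and deduct in one critical section: there is no window
        # where two requests both see enough budget and both spend it.
        with self._lock:
            if self._remaining < cost_cents:
                return False
            self._remaining -= cost_cents
            return True

budget = Budget(cap_cents=100)
print(budget.try_deduct(60))  # True
print(budget.try_deduct(60))  # False: would exceed the cap
print(budget.try_deduct(40))  # True: exactly exhausts it
```

The failure mode this avoids is the naive read-then-write pattern, where a check and a separate decrement race and a cap becomes a suggestion.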
What you don't get: a new SDK to learn, a vector store, an "agent framework," or yet another opinion about how your prompts should look. We charge a flat 10% over passthrough cost, and that's the only number you have to track. If we ever stop earning it, you can switch back to the providers in an afternoon.
We'll write more here as we go. The bar is high: every post is reviewed by a real human and won't ship if it reads like AI marketing copy. If a post sounds generic, it gets rewritten or pulled.
Synapse Publication
Field notes, technical write-ups, and benchmarks from the team building Synapse Garden.