How do I call gemini-2.5-pro from my code?
Use the OpenAI or Anthropic SDK and point `baseURL` at https://synapse.garden/api/v1. Set `model: 'google/gemini-2.5-pro'` and supply your Synapse Garden API key. No code changes are needed beyond the base URL.
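If you'd rather not pull in an SDK, the same call works over raw HTTP. The sketch below uses only the Python standard library and assumes the endpoint follows the standard OpenAI `/chat/completions` route and response shape; `MG_KEY` is the same environment variable used in the TypeScript snippet, so substitute your own key name.

```python
import json
import os
import urllib.request

BASE_URL = "https://synapse.garden/api/v1"

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble a standard Chat Completions request for gemini-2.5-pro."""
    body = json.dumps({
        "model": "google/gemini-2.5-pro",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('MG_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

def ask(prompt: str) -> str:
    """Send the request and return the first choice's text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `ask("Why is the sky blue?")` performs the round trip; `build_request` is split out so the payload can be inspected or logged before sending.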
google/gemini-2.5-pro: Gemini 2.5 Pro is our most advanced reasoning Gemini model, capable of solving complex problems. Gemini 2.5 Pro can comprehend vast datasets and challenging problems from different information sources, including text, audio, images, video, and even entire code repositories.
```ts
// Drop-in OpenAI-compatible client via the Vercel AI SDK
import { createOpenAI } from '@ai-sdk/openai'
import { generateText } from 'ai'

const synapse = createOpenAI({
  baseURL: 'https://synapse.garden/api/v1',
  apiKey: process.env.MG_KEY,
})

const { text } = await generateText({
  model: synapse('google/gemini-2.5-pro'),
  prompt: 'Why is the sky blue?',
})
```
| Rate | USD per million tokens |
|---|---|
| Input | $1.38 |
| Output | $11.00 |
| Cache read | $0.138 |
Input: $1.38 per million tokens. Output: $11.00 per million tokens. Cache reads: $0.138 per million tokens. The free tier includes one million tokens every month at no cost.
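For budgeting, those rates reduce to a simple per-request formula. The sketch below just encodes the table above; treating cached input tokens as billed at the cache-read rate instead of the input rate is an assumption about how the discount applies.

```python
# Rates from the pricing table, in USD per million tokens.
INPUT_PER_M = 1.38
OUTPUT_PER_M = 11.00
CACHE_READ_PER_M = 0.138

def cost_usd(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate request cost; cached_tokens are input tokens served from cache."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 100K-token prompt with a 10K-token reply costs about $0.25.
```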
gemini-2.5-pro supports a context window of 1.0M tokens, with a maximum output of 65.5K tokens.
No. Synapse Garden is the single API surface — one key gives you OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, xAI, Cohere, and more. Billing, rate limits, and audit logs are unified.
Sign up, create a key, drop our base URL into your existing client. The free tier includes a million tokens every month — no credit card.