Tool use & function calling

Let the model call your functions. Multi-step orchestration, structured arguments, parallel calls.

FIG. 00 · TOOL USE · MODEL ⇄ FUNCTIONS

Tool use (also called "function calling") lets the model decide to invoke one of your functions with structured arguments, see the result, and continue reasoning. The AI SDK has the cleanest abstraction for this — a tools map with Zod schemas — and Synapse Garden routes it through the same OpenAI-/Anthropic-compatible wire format you'd use directly.

FIG. 01 · AGENTIC LOOP · SCHEMATIC
Each iteration: the model decides whether to emit a `text` answer or `tool_calls`. Your runtime executes the tool calls in parallel, appends `tool_result` messages, and loops back. The terminal state is a text answer with no tool calls. The AI SDK's `maxSteps` caps the loop.
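The loop above can be sketched independently of any SDK. A minimal version with a stubbed model interface (the types and function names here are illustrative, not AI SDK APIs):

```typescript
type ToolCall = { name: string; args: Record<string, unknown> }
type ModelTurn = { text?: string; toolCalls?: ToolCall[] }

// One agentic loop: ask the model, run any requested tools in parallel,
// append the results, and repeat. The terminal state is a turn with no
// tool calls; the step cap plays the same role as the AI SDK's maxSteps.
async function runLoop(
  model: (history: unknown[]) => Promise<ModelTurn>,
  tools: Record<string, (args: any) => Promise<unknown>>,
  maxSteps: number,
): Promise<string> {
  const history: unknown[] = []
  for (let step = 0; step < maxSteps; step++) {
    const turn = await model(history)
    if (!turn.toolCalls?.length) return turn.text ?? "" // terminal state
    const results = await Promise.all(
      turn.toolCalls.map((c) => tools[c.name](c.args)),
    )
    history.push({ role: "assistant", toolCalls: turn.toolCalls })
    results.forEach((r, i) =>
      history.push({ role: "tool", name: turn.toolCalls![i].name, result: r }),
    )
  }
  throw new Error("maxSteps exceeded")
}
```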

Single-step tool

import { streamText, tool } from "ai"
import { z } from "zod"

const result = streamText({
  model: "openai/gpt-5.4",
  baseURL: "https://synapse.garden/api/v1",
  apiKey: process.env.MG_KEY,
  prompt: "What's the weather in Tokyo right now?",
  tools: {
    getWeather: tool({
      description: "Look up the current weather for a city",
      parameters: z.object({
        city: z.string().describe("City name, e.g. 'Tokyo'"),
        units: z.enum(["c", "f"]).default("c"),
      }),
      execute: async ({ city, units }) => {
        const res = await fetch(`https://wttr.in/${city}?format=j1&u=${units}`)
        const json = await res.json()
        return {
          temperatureC: json.current_condition[0].temp_C,
          description: json.current_condition[0].weatherDesc[0].value,
        }
      },
    }),
  },
})

for await (const part of result.fullStream) {
  if (part.type === "text-delta") process.stdout.write(part.textDelta)
  if (part.type === "tool-call") console.log("\n[calling]", part.toolName, part.args)
  if (part.type === "tool-result") console.log("[result]", part.result)
}

The model sees the tool definitions as a JSON schema, decides whether to call one, and the AI SDK runs your execute function automatically. The tool result feeds back into the conversation for the model's next turn.
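For reference, the getWeather parameters above compile to roughly this JSON schema on the wire. This object is hand-written for illustration; the AI SDK derives it from the Zod shape automatically:

```typescript
// Roughly what the provider receives for the getWeather tool above.
// Note that units is optional (it has a Zod default), so only city is required.
const getWeatherSchema = {
  type: "function",
  function: {
    name: "getWeather",
    description: "Look up the current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. 'Tokyo'" },
        units: { type: "string", enum: ["c", "f"], default: "c" },
      },
      required: ["city"],
    },
  },
}
```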

Multi-step orchestration

Set maxSteps and the AI SDK will loop tool-call → result → next-step until either the model stops calling tools or you hit the cap.

const result = streamText({
  model: "openai/gpt-5.4",
  prompt: "Build me a 3-day Tokyo itinerary based on the weather forecast.",
  maxSteps: 5,
  tools: {
    getForecast: tool({
      description: "Get a multi-day weather forecast",
      parameters: z.object({ city: z.string(), days: z.number().int().min(1).max(14) }),
      execute: async ({ city, days }) => fetchForecast(city, days),
    }),
    searchAttractions: tool({
      description: "Search for tourist attractions",
      parameters: z.object({ city: z.string(), tag: z.string().optional() }),
      execute: async ({ city, tag }) => searchAttractions(city, tag),
    }),
  },
})

Each step is a separate request to the underlying provider — maxSteps is the safety bound on how many times we round-trip. Token usage adds up across steps; the final result.usage is the cumulative total.

One step is one request

The model sees prior tool calls and their results in its context on each subsequent step. Long multi-step traces can blow your context window — set maxSteps: 3 until you've measured the typical run length.
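One way to keep long traces in budget, sketched below as a plain message transform (this is not an AI SDK feature, just an assumption about how you might preprocess history): keep the most recent tool results verbatim and collapse the bodies of older ones.

```typescript
type Msg = { role: string; content: string }

// Keep the last `keep` tool-result messages intact and collapse the bodies
// of older ones, so the trace shape survives but the tokens don't.
function trimToolResults(messages: Msg[], keep: number): Msg[] {
  const toolIdxs = messages
    .map((m, i) => (m.role === "tool" ? i : -1))
    .filter((i) => i >= 0)
  const recent = new Set(toolIdxs.slice(-keep))
  return messages.map((m, i) =>
    m.role === "tool" && !recent.has(i)
      ? { ...m, content: "[result elided]" }
      : m,
  )
}
```

The trade-off: the model loses access to old result details, so only elide results it no longer needs.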

Parallel tool calls

Modern OpenAI models can request multiple tools in a single step. The AI SDK runs them concurrently and feeds the combined results to the next step:

// Model emits two tool calls in one turn:
{
  "tool_calls": [
    { "id": "call_1", "function": { "name": "getWeather", "arguments": "{\"city\":\"Tokyo\"}" } },
    { "id": "call_2", "function": { "name": "getWeather", "arguments": "{\"city\":\"Kyoto\"}" } }
  ]
}

You write the same tools map; the AI SDK handles the fan-out. If you'd rather the model request one call per turn, OpenAI's API accepts `parallel_tool_calls: false`, which you can pass through provider options.
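The fan-out itself is just a Promise.all over the calls in one assistant turn. A rough sketch of what the runtime does (type and function names are illustrative):

```typescript
type WireToolCall = {
  id: string
  function: { name: string; arguments: string } // arguments arrive as a JSON string
}

// Execute every call from one assistant turn concurrently, pairing each
// result with the id the model assigned so results can be matched back.
async function fanOut(
  calls: WireToolCall[],
  tools: Record<string, (args: any) => Promise<unknown>>,
) {
  return Promise.all(
    calls.map(async (c) => ({
      tool_call_id: c.id,
      result: await tools[c.function.name](JSON.parse(c.function.arguments)),
    })),
  )
}
```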

Forcing a tool

Most of the time you let the model decide. Sometimes you need it to call something specific:

streamText({
  model: "openai/gpt-5.4",
  prompt: "Get me the weather.",
  toolChoice: "required",                // call SOMETHING
  // or:
  toolChoice: { type: "tool", toolName: "getWeather" }, // call THIS
  tools: { … },
})

toolChoice: "auto" (default) lets the model pick. "none" disables tools entirely.
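On the OpenAI-compatible wire, these options map to the tool_choice request field. A sketch of the translation (the helper is illustrative; the field shapes follow OpenAI's public API):

```typescript
type ToolChoice =
  | "auto" | "none" | "required"
  | { type: "tool"; toolName: string }

// Map the AI SDK's toolChoice to the OpenAI-compatible tool_choice field.
// String modes pass through; a forced tool becomes a function reference.
function toWire(choice: ToolChoice) {
  if (typeof choice === "string") return choice
  return { type: "function", function: { name: choice.toolName } }
}
```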

Provider compatibility

Tool use is supported across the major providers, but the wire format differs:

Provider                          Tool field
OpenAI / xAI / DeepSeek / most    tools[] with JSON schema in function.parameters
Anthropic                         tools[] with JSON schema in input_schema
Google Gemini                     tools[] with function_declarations

The AI SDK normalizes all three. Going through Synapse Garden's OpenAI-compat surface (/v1/chat/completions) translates to the right shape on the upstream automatically — your code stays the same when you swap models.
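A sketch of the OpenAI-to-Anthropic translation such a gateway performs (field names follow the two public APIs; the helper itself is illustrative):

```typescript
// OpenAI-style tool definition → Anthropic-style. Same JSON schema,
// different envelope: parameters becomes input_schema, no function wrapper.
function toAnthropic(t: {
  function: { name: string; description?: string; parameters: object }
}) {
  return {
    name: t.function.name,
    description: t.function.description,
    input_schema: t.function.parameters,
  }
}
```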

Tool-use with reasoning models

OpenAI gpt-5.5* and o*, DeepSeek r2, Anthropic with extended thinking — all support tool use, but the model thinks first, then calls tools. You'll often see a long initial reasoning phase before any tool call appears in the stream. Plan your time-to-first-token (TTFT) budget accordingly.

streamText({
  model: "openai/gpt-5.5",
  prompt: "Plan a multi-leg flight from SFO to Tokyo with two stopovers.",
  providerOptions: {
    openai: { reasoningEffort: "high", reasoningSummary: "auto" },
  },
  tools: { searchFlights: …, getAirportInfo: … },
  maxSteps: 8,
})
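To see how long that reasoning phase actually takes, you can time the first part out of any async stream. This is a generic helper, not an AI SDK API:

```typescript
// Measure time-to-first-part of any async iterable, then pass parts through
// unchanged. Wrap result.fullStream with it to observe the reasoning delay.
async function* withTTFT<T>(
  stream: AsyncIterable<T>,
  onFirst: (ms: number) => void,
): AsyncGenerator<T> {
  const start = Date.now()
  let first = true
  for await (const part of stream) {
    if (first) {
      onFirst(Date.now() - start)
      first = false
    }
    yield part
  }
}
```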

Capturing tool calls in your own logic

If you don't pass execute, the AI SDK leaves the call for you to run manually. Useful when the tool requires user confirmation or runs on a different machine:

const tools = {
  deleteCustomer: tool({
    description: "Delete a customer record",
    parameters: z.object({ customerId: z.string() }),
    // no execute — we'll handle it
  }),
}

const result = await generateText({
  model: "openai/gpt-5.4",
  prompt: "Delete customer 42, please.",
  tools,
  toolChoice: "required",
})

for (const call of result.toolCalls) {
  if (call.toolName === "deleteCustomer") {
    const confirmed = await askHumanForApproval(call.args.customerId)
    if (confirmed) await deleteCustomer(call.args.customerId)
  }
}
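If you then want the model to see the outcome, append the assistant's tool call and your result as messages on the next request. The message shapes below follow the OpenAI-compatible chat format; the helper name is illustrative:

```typescript
// Build the two messages that feed a manually executed tool call back:
// the assistant turn that requested it, and your tool result keyed by id.
function toolRoundTrip(
  call: { toolCallId: string; toolName: string; args: unknown },
  result: unknown,
) {
  return [
    {
      role: "assistant",
      tool_calls: [{
        id: call.toolCallId,
        type: "function",
        function: { name: call.toolName, arguments: JSON.stringify(call.args) },
      }],
    },
    { role: "tool", tool_call_id: call.toolCallId, content: JSON.stringify(result) },
  ]
}
```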

Errors in tools

If your execute throws, the AI SDK serializes the error and feeds it back to the model as a tool result. The model typically acknowledges the failure and tries a different approach. To force a hard fail instead, throw a dedicated error class in execute, then check for it in result.toolResults after the run:

class FatalToolError extends Error {}

execute: async (args) => {
  if (!isAuthorized(args.userId)) throw new FatalToolError("Unauthorized")
  return await doWork(args)
}

// then check after the run — the thrown error is serialized into the
// tool result, so match by name rather than instanceof:
for (const r of result.toolResults) {
  if ((r.result as any)?.name === "FatalToolError") {
    throw new Error((r.result as any).message)
  }
}

Pricing for tool calls

Tool definitions, calls, and results are all billed as input tokens on the next step. A multi-step run with three tool calls effectively pays for the same conversation context three times — usually still cheaper than running multiple independent calls because of provider-side prefix caching (see Caching).
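Back-of-envelope: with a base context of C input tokens and roughly R tokens of tool traffic appended per step, an N-step run bills about C·N + R·(0 + 1 + … + N−1) input tokens before caching. A sketch (real counts vary by tokenizer and cache hit rate):

```typescript
// Approximate total input tokens billed across an N-step run where the
// full (growing) context is re-sent on every step. Prefix cache hits
// reduce the effective cost of the repeated portion.
function billedInputTokens(
  baseContext: number,
  perStepTokens: number,
  steps: number,
): number {
  let total = 0
  for (let s = 0; s < steps; s++) total += baseContext + s * perStepTokens
  return total
}
```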