
Tool Use & Function Calling

Protocols intermediate 15 min
Sources verified Dec 22

Tool use enables LLMs to interact with external systems by generating structured function calls that the application executes, returning the results to the model.

Tool use (also called function calling) is a protocol that allows LLMs to request execution of external functions. The model doesn't execute code itself—instead, it generates a structured JSON request describing which function to call and with what arguments. Your application then executes the function and returns the result to the model, which can incorporate it into its response or make additional tool calls.

How Tool Use Works

The tool use cycle follows this pattern:

  1. Tool Definition: You provide the model with a schema describing available tools (name, description, parameter types)
  2. Model Decision: The model analyzes the user's request and decides whether to call a tool
  3. Tool Call Generation: If needed, the model outputs a structured tool call request (JSON)
  4. Application Execution: Your code executes the actual function with the provided arguments
  5. Result Return: You send the function result back to the model
  6. Response Generation: The model uses the result to formulate its final answer

This cycle can repeat multiple times: the model might call several tools in sequence or in parallel to complete a task.
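Steps 4 and 5 of the cycle can be sketched provider-agnostically: the application keeps a registry of callable functions and executes whatever call the model emits. The registry and the `get_weather` stub below are illustrative assumptions, not part of any SDK.

```python
import json

# Hypothetical tool registry mapping tool names to Python callables.
# get_weather is a stub returning canned data, not a real weather API.
TOOL_REGISTRY = {
    "get_weather": lambda location, unit="celsius": {
        "temperature": 22, "conditions": "sunny", "unit": unit
    },
}

def execute_tool_call(name, arguments_json):
    """Execute one model-requested tool call (steps 4-5 of the cycle).

    `arguments_json` is the JSON string of arguments the model generated;
    the return value is serialized so it can be sent back to the model.
    """
    func = TOOL_REGISTRY[name]
    args = json.loads(arguments_json)
    return json.dumps(func(**args))
```

Because the model only ever names a tool and supplies arguments, the registry is also a natural place to enforce which functions the model is allowed to invoke.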

// OpenAI Tool Format
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get current weather for a location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "City name, e.g., San Francisco, CA"
        },
        "unit": {
          "type": "string",
          "enum": ["celsius", "fahrenheit"],
          "description": "Temperature unit"
        }
      },
      "required": ["location"]
    }
  }
}

// Anthropic Tool Format
{
  "name": "get_weather",
  "description": "Get current weather for a location",
  "input_schema": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City name, e.g., San Francisco, CA"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"],
        "description": "Temperature unit"
      }
    },
    "required": ["location"]
  }
}

Model Response with Tool Call

When the model decides to use a tool, it returns a structured response indicating the tool call:

// OpenAI Response Format
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{\"location\": \"San Francisco, CA\", \"unit\": \"fahrenheit\"}"
      }
    }
  ]
}

// Anthropic Response Format
{
  "role": "assistant",
  "content": [
    {
      "type": "tool_use",
      "id": "toolu_abc123",
      "name": "get_weather",
      "input": {
        "location": "San Francisco, CA",
        "unit": "fahrenheit"
      }
    }
  ]
}
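The two response shapes above carry the same information, so application code often normalizes them before dispatch. A sketch of such a normalizer (an illustrative helper, not part of either SDK) highlights the one subtle difference: OpenAI serializes arguments as a JSON string, while Anthropic returns an already-parsed object.

```python
import json

def extract_tool_calls(provider, message):
    """Normalize a model response into (id, name, args-dict) tuples."""
    if provider == "openai":
        # Arguments arrive as a JSON string and must be parsed.
        return [
            (c["id"], c["function"]["name"], json.loads(c["function"]["arguments"]))
            for c in message.get("tool_calls", [])
        ]
    if provider == "anthropic":
        # Tool use is one block type among possibly several content blocks.
        return [
            (b["id"], b["name"], b["input"])
            for b in message["content"]
            if b.get("type") == "tool_use"
        ]
    raise ValueError(f"unknown provider: {provider}")
```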

Returning Results

After executing the function, you return the result in the next message:

// OpenAI Tool Result
{
  "role": "tool",
  "tool_call_id": "call_abc123",
  "content": "{\"temperature\": 72, \"conditions\": \"sunny\", \"humidity\": 65}"
}

// Anthropic Tool Result
{
  "role": "user",
  "content": [
    {
      "type": "tool_result",
      "tool_use_id": "toolu_abc123",
      "content": "{\"temperature\": 72, \"conditions\": \"sunny\", \"humidity\": 65}"
    }
  ]
}
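The two result messages above likewise differ only in their envelope, so a single helper can wrap an executed result for either provider. This is a sketch for illustration; neither SDK ships a function like this.

```python
import json

def tool_result_message(provider, call_id, result):
    """Wrap an executed tool result in the provider's expected message shape."""
    content = json.dumps(result)
    if provider == "openai":
        # Dedicated "tool" role, linked back by tool_call_id.
        return {"role": "tool", "tool_call_id": call_id, "content": content}
    if provider == "anthropic":
        # A "user" message containing a tool_result content block.
        return {
            "role": "user",
            "content": [
                {"type": "tool_result", "tool_use_id": call_id, "content": content}
            ],
        }
    raise ValueError(f"unknown provider: {provider}")
```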

Key Differences: OpenAI vs Anthropic

OpenAI:

  • Tools nested under function object
  • Parameters in parameters field
  • Tool calls have separate IDs
  • Results use tool role

Anthropic:

  • Flat tool structure
  • Parameters in input_schema field
  • Tool use embedded in content blocks
  • Results use user role with tool_result type
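Since both schemas carry the same fields, converting a tool definition from one format to the other is mechanical, which is what makes cross-provider abstractions feasible. The converters below are an illustrative sketch:

```python
def openai_to_anthropic(tool):
    """Convert an OpenAI-style tool definition to Anthropic's flat shape."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn["description"],
        "input_schema": fn["parameters"],  # same JSON Schema, renamed field
    }

def anthropic_to_openai(tool):
    """Convert an Anthropic-style tool definition to OpenAI's nested shape."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["input_schema"],
        },
    }
```

The conversion is lossless in both directions: the JSON Schema describing parameters passes through unchanged, and only the surrounding field names differ.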

MCP (Model Context Protocol) provides a standardized format that works across both providers—see the MCP concept for details.

Key Takeaways

  • Tool use allows LLMs to request external function execution via structured JSON
  • The model generates tool calls; your application executes them and returns results
  • OpenAI and Anthropic use different but functionally equivalent formats
  • Tool definitions use JSON Schema to describe parameters and types
  • The request-response cycle can repeat for multi-step tasks
  • Good tool descriptions are critical—they guide the model's decision to use the tool

In This Platform

This platform will use tool calling to enable AI assistants to query the assessment database, retrieve personalized recommendations, and search source citations. The survey schema in schema/survey.schema.json defines structured outputs that could be enhanced with tool definitions.

Relevant Files:
  • schema/survey.schema.json
  • recommendations/single_dimension_rules.json
  • recommendations/combination_rules.json

