
Build an LLM Tool By Hand

Build · Intermediate · 30 min · TypeScript

Implement a tool from scratch, without a framework, to understand exactly how tool calling works under the hood.

1. Understand the Scenario

You're building a simple weather assistant. Instead of using LangChain or a framework, you'll implement the tool calling loop yourself to understand exactly how it works.

Learning Objectives

  • Understand the tool calling protocol
  • Implement a tool definition matching OpenAI's format
  • Handle tool execution and result injection
  • Complete the assistant loop

2. Follow the Instructions

What You'll Build

A weather assistant that can answer questions like "What's the weather in Tokyo?" by:

  1. Receiving the user's question
  2. Deciding to call a get_weather tool
  3. Executing the tool and getting results
  4. Generating a natural language response
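The four steps above map directly onto the messages exchanged with the API. Here is an illustrative transcript of one round trip (the `call_abc123` id and all values are invented; real ones come from the model):

```typescript
// Illustrative message transcript for one tool-calling round trip.
// Ids and values are made up for demonstration.
const transcript = [
  // 1. The user's question
  { role: 'user', content: "What's the weather in Tokyo?" },
  // 2. The model decides to call get_weather (no text content yet)
  {
    role: 'assistant',
    content: null,
    tool_calls: [{
      id: 'call_abc123',
      type: 'function',
      function: { name: 'get_weather', arguments: '{"location":"Tokyo"}' }
    }]
  },
  // 3. Your code executes the tool and reports the result back
  {
    role: 'tool',
    tool_call_id: 'call_abc123',
    content: '{"location":"Tokyo","temperature":22,"condition":"Partly cloudy"}'
  },
  // 4. The model turns the tool result into natural language
  { role: 'assistant', content: 'The weather in Tokyo is 22°C and partly cloudy.' }
];

console.log(transcript.length); // 4
```

Note that step 3 is a message you construct yourself: the API never executes tools for you, it only asks you to.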

Step 1: Define Your Tool

Tools are defined using JSON Schema. The model uses this schema to understand what the tool does and what parameters it accepts.

// Define the tool in OpenAI's format
const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'City name, e.g., "Tokyo" or "San Francisco"'
          },
          units: {
            type: 'string',
            enum: ['celsius', 'fahrenheit'],
            description: 'Temperature units'
          }
        },
        required: ['location']
      }
    }
  }
];

Step 2: Implement the Tool Function

This is the actual code that runs when the model calls the tool.

// Mock implementation - in production, call a real weather API
function getWeather(location: string, units: string = 'celsius'): string {
  // Simulate weather data
  const weather = {
    location,
    temperature: units === 'celsius' ? 22 : 72,
    units,
    condition: 'Partly cloudy',
    humidity: 65
  };
  return JSON.stringify(weather);
}
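Calling the mock directly shows what the model will receive as the tool result. The key detail: a tool result is always a string, so we serialize with `JSON.stringify`:

```typescript
// Same mock as above, repeated here so the snippet runs standalone.
function getWeather(location: string, units: string = 'celsius'): string {
  return JSON.stringify({
    location,
    temperature: units === 'celsius' ? 22 : 72,
    units,
    condition: 'Partly cloudy',
    humidity: 65
  });
}

// The tool result the model sees is a string, not an object.
const result = getWeather('Tokyo');
console.log(typeof result); // "string"
console.log(JSON.parse(result).temperature); // 22
```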

Step 3: Build the Conversation Loop

This is the core logic. You'll:

  1. Send the user message with tools defined
  2. Check if the model wants to call a tool
  3. Execute the tool and send results back
  4. Get the final response

Your Task: Complete the executeTools and chat functions in the starter code below.
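One detail that trips people up: the model serializes tool arguments as a JSON string, not an object, so you must parse them yourself. A quick sketch of the shape you'll be handling (the id and values are invented for illustration):

```typescript
// Hypothetical tool call object, mirroring the shape the API returns.
const toolCall = {
  id: 'call_xyz789',
  type: 'function',
  function: {
    name: 'get_weather',
    arguments: '{"location":"Tokyo","units":"celsius"}' // a JSON *string*, not an object
  }
};

// Parse the string before passing values to your tool function.
const args = JSON.parse(toolCall.function.arguments);
console.log(args.location); // "Tokyo"
```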

3. Try It Yourself

starter_code.ts
/**
 * Key Points:
 * - Line ~37: Parse the JSON arguments the model provided
 * - Line ~53: The model's message includes tool_calls array
 * - Line ~56: IMPORTANT: Include the assistant message with tool_calls
 * - Line ~59: Add tool results to continue the conversation
 */
import OpenAI from 'openai';

const openai = new OpenAI();

// Tool definitions
const tools: OpenAI.ChatCompletionTool[] = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'City name, e.g., "Tokyo"'
          },
          units: {
            type: 'string',
            enum: ['celsius', 'fahrenheit']
          }
        },
        required: ['location']
      }
    }
  }
];

// Tool implementation
function getWeather(location: string, units = 'celsius'): string {
  return JSON.stringify({
    location,
    temperature: units === 'celsius' ? 22 : 72,
    units,
    condition: 'Partly cloudy'
  });
}

// Execute tools and return results
function executeTools(toolCalls: OpenAI.ChatCompletionMessageToolCall[]) {
  // TODO: Filter for function calls, parse arguments, execute the tool, return with tool_call_id
  throw new Error('Not implemented');
}

async function chat(userMessage: string): Promise<string> {
  const messages: OpenAI.ChatCompletionMessageParam[] = [
    { role: 'system', content: 'You are a helpful weather assistant.' },
    { role: 'user', content: userMessage }
  ];

  // First API call
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    tools
  });

  const assistantMessage = response.choices[0].message;

  // TODO: Check for tool_calls, execute them, add results to messages, make second API call
  throw new Error('Not implemented');
}

// Test
chat('What is the weather like in Tokyo?').then(console.log);
// Example output: "The weather in Tokyo is currently 22°C and partly cloudy."

This TypeScript exercise requires local setup. Copy the code into your IDE to run it.

4. Get Help (If Needed)

Hint 1: Check if `assistantMessage.tool_calls` exists and has items before trying to execute tools.
Hint 2: When you execute a tool, you need to add the result as a message with `role: 'tool'` and include the `tool_call_id`.
Hint 3: The conversation flow is: user message -> assistant with tool_calls -> tool results -> assistant final response. You need to push all these to the messages array.

5. Check the Solution

solution.ts
/**
 * Key Points:
 * - Line ~37: Parse the JSON arguments the model provided
 * - Line ~53: The model's message includes tool_calls array
 * - Line ~56: IMPORTANT: Include the assistant message with tool_calls
 * - Line ~59: Add tool results to continue the conversation
 */
import OpenAI from 'openai';

const openai = new OpenAI();

// Tool definitions
const tools: OpenAI.ChatCompletionTool[] = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'City name, e.g., "Tokyo"'
          },
          units: {
            type: 'string',
            enum: ['celsius', 'fahrenheit']
          }
        },
        required: ['location']
      }
    }
  }
];

// Tool implementation
function getWeather(location: string, units = 'celsius'): string {
  return JSON.stringify({
    location,
    temperature: units === 'celsius' ? 22 : 72,
    units,
    condition: 'Partly cloudy'
  });
}

// Execute tools and return results
function executeTools(toolCalls: OpenAI.ChatCompletionMessageToolCall[]) {
  // Filter for function calls, parse their arguments, execute the tool, and return results with tool_call_id
  return toolCalls
    .filter((call): call is OpenAI.ChatCompletionMessageToolCall & { type: 'function' } =>
      call.type === 'function'
    )
    .map(call => {
      const args = JSON.parse(call.function.arguments);

      if (call.function.name === 'get_weather') {
        return {
          tool_call_id: call.id,
          role: 'tool' as const,
          content: getWeather(args.location, args.units)
        };
      }

      return {
        tool_call_id: call.id,
        role: 'tool' as const,
        content: JSON.stringify({ error: 'Unknown tool' })
      };
    });
}

async function chat(userMessage: string): Promise<string> {
  const messages: OpenAI.ChatCompletionMessageParam[] = [
    { role: 'system', content: 'You are a helpful weather assistant.' },
    { role: 'user', content: userMessage }
  ];

  // First API call
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    tools
  });

  const assistantMessage = response.choices[0].message;

  // Check for tool calls
  if (assistantMessage.tool_calls && assistantMessage.tool_calls.length > 0) {
    // Add assistant's message (with tool_calls)
    messages.push(assistantMessage);

    // Execute tools and add results
    const toolResults = executeTools(assistantMessage.tool_calls);
    messages.push(...toolResults);

    // Second API call with tool results
    const finalResponse = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages,
      tools
    });

    return finalResponse.choices[0].message.content || '';
  }

  return assistantMessage.content || '';
}

// Test
chat('What is the weather like in Tokyo?').then(console.log);
// Example output: "The weather in Tokyo is currently 22°C and partly cloudy."

Common Mistakes

Forgetting to add the assistant message with tool_calls before adding tool results

Why it's wrong: The API expects the conversation to include the assistant's tool_calls message before the tool results. Without it, the context is broken.

How to fix: Always push assistantMessage to messages before pushing tool results.

Not including tool_call_id in tool result messages

Why it's wrong: The API needs to match each tool result with its corresponding tool call.

How to fix: Include `tool_call_id: call.id` in each tool result message.
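Both mistakes come down to how messages are appended. A minimal sketch of the correct ordering and shape (the `call_1` id and values are invented):

```typescript
// Simplified message type for illustration only.
type Msg = {
  role: string;
  content: string | null;
  tool_calls?: unknown[];
  tool_call_id?: string;
};

const messages: Msg[] = [
  { role: 'user', content: "What's the weather in Paris?" }
];

// 1. The assistant message carrying tool_calls must be pushed FIRST...
messages.push({
  role: 'assistant',
  content: null,
  tool_calls: [{
    id: 'call_1',
    type: 'function',
    function: { name: 'get_weather', arguments: '{"location":"Paris"}' }
  }]
});

// 2. ...then each tool result, tied back to its call via tool_call_id.
messages.push({
  role: 'tool',
  tool_call_id: 'call_1',
  content: '{"location":"Paris","temperature":22}'
});

console.log(messages.map(m => m.role).join(' -> ')); // user -> assistant -> tool
```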

Test Cases

Tool is called for weather question

When asking about weather, the model should call get_weather

Input: What's the weather in Paris?
Expected: Tool call to get_weather with location=Paris

Direct response for non-weather question

For questions not needing weather, respond directly

Input: What is 2+2?
Expected: Direct text response without tool calls

